Test Report: KVM_Linux_crio 19774

95efbc930ecf4c942ef544a2e8709bfd2a544710:2024-10-08:36559

Failed tests (37/265)

Order  Failed test  Duration (s)
32 TestAddons/serial/GCPAuth/PullSecret 480.6
35 TestAddons/parallel/Ingress 151.45
37 TestAddons/parallel/MetricsServer 359.94
45 TestAddons/StoppedEnableDisable 154.36
115 TestFunctional/parallel/ImageCommands/ImageListShort 2.29
164 TestMultiControlPlane/serial/StopSecondaryNode 141.42
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.57
166 TestMultiControlPlane/serial/RestartSecondaryNode 6.51
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.24
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 797.02
169 TestMultiControlPlane/serial/DeleteSecondaryNode 10.93
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 5.29
171 TestMultiControlPlane/serial/StopCluster 176.13
231 TestMultiNode/serial/RestartKeepsNodes 317.49
233 TestMultiNode/serial/StopMultiNode 145.07
240 TestPreload 268.63
248 TestKubernetesUpgrade 521.82
254 TestPause/serial/SecondStartNoReconfiguration 76.47
284 TestStartStop/group/old-k8s-version/serial/FirstStart 275.45
292 TestStartStop/group/no-preload/serial/Stop 139.04
298 TestStartStop/group/embed-certs/serial/Stop 139.04
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.97
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 105.23
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 710.42
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.15
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.24
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.14
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.37
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 488.33
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 491.12
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 347.97
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 164.04
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 7200.058
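A minimal re-run sketch for any single entry in this list, assuming a local checkout of the minikube repository (the integration tests live under test/integration) and the standard Go test runner; the timeout value is illustrative and any driver/runtime flags are environment-specific:

    # re-run only the PullSecret failure with the standard Go test runner
    go test ./test/integration -v -timeout 90m -run "TestAddons/serial/GCPAuth/PullSecret"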
TestAddons/serial/GCPAuth/PullSecret (480.6s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-738106 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-738106 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3f448f11-f3a9-40df-9e9d-182a9b287c9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-738106 -n addons-738106
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-08 17:43:52.026458521 +0000 UTC m=+625.515610711
addons_test.go:627: (dbg) Run:  kubectl --context addons-738106 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-738106 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-738106/192.168.39.48
Start Time:       Tue, 08 Oct 2024 17:35:51 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.22
IPs:
IP:  10.244.0.22
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62scb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-62scb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/busybox to addons-738106
Normal   Pulling    6m30s (x4 over 8m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m30s (x4 over 8m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m30s (x4 over 8m)   kubelet            Error: ErrImagePull
Warning  Failed     6m16s (x6 over 8m)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m46s (x21 over 8m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-738106 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-738106 logs busybox -n default: exit status 1 (67.817833ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-738106 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.60s)
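The events above show every pull attempt failing at the registry-auth step ("unable to retrieve auth token: invalid username/password"), not at image resolution, so the pod stays in ImagePullBackOff until the 8m0s wait expires. A minimal diagnostic sketch, assuming the gcp-auth addon is expected to attach pull credentials to the pod and its ServiceAccount (standard kubectl only; the exact secret names are not shown in this log):

    # inspect what, if anything, was injected into the failing pod and the default ServiceAccount
    kubectl --context addons-738106 get pod busybox -n default -o jsonpath='{.spec.imagePullSecrets}'
    kubectl --context addons-738106 get serviceaccount default -n default -o yaml
    kubectl --context addons-738106 get secrets -n default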

                                                
                                    
TestAddons/parallel/Ingress (151.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-738106 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-738106 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-738106 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [71eea7b8-ba54-449e-98bf-d99695b23e27] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [71eea7b8-ba54-449e-98bf-d99695b23e27] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.071589076s
I1008 17:44:34.450534  537013 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-738106 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.266146206s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-738106 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.48
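The ssh'd curl exited with status 28, curl's timeout exit code, i.e. nothing completed a response on 127.0.0.1:80 inside the VM within the roughly 2m10s window. A hedged reproduction sketch using the same profile and a short, explicit timeout (flags are standard curl/kubectl; the binary path matches the one used above):

    # reproduce the ingress check with a bounded timeout and verbose connection info
    out/minikube-linux-amd64 -p addons-738106 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # confirm the ingress controller pod is running and where it was scheduled
    kubectl --context addons-738106 -n ingress-nginx get pods -o wide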
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-738106 -n addons-738106
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 logs -n 25: (1.413269083s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| delete  | -p download-only-691270                                                                     | download-only-691270 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| delete  | -p download-only-463465                                                                     | download-only-463465 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| delete  | -p download-only-691270                                                                     | download-only-691270 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-340266 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | binary-mirror-340266                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46361                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-340266                                                                     | binary-mirror-340266 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | addons-738106                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | addons-738106                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-738106 --wait=true                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:35 UTC | 08 Oct 24 17:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:43 UTC | 08 Oct 24 17:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | -p addons-738106                                                                            |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-738106 ssh cat                                                                       | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | /opt/local-path-provisioner/pvc-d1d617de-cc0c-4dd9-bd33-d96d94d0bb04_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | -p addons-738106                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-738106 ip                                                                            | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-738106 ssh curl -s                                                                   | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:45 UTC | 08 Oct 24 17:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:45 UTC | 08 Oct 24 17:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-738106 ip                                                                            | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:46 UTC | 08 Oct 24 17:46 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:33:40
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:33:40.689136  537626 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:33:40.689243  537626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:40.689253  537626 out.go:358] Setting ErrFile to fd 2...
	I1008 17:33:40.689257  537626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:40.689439  537626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:33:40.690035  537626 out.go:352] Setting JSON to false
	I1008 17:33:40.691113  537626 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4573,"bootTime":1728404248,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:33:40.691166  537626 start.go:139] virtualization: kvm guest
	I1008 17:33:40.693113  537626 out.go:177] * [addons-738106] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:33:40.694247  537626 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:33:40.694258  537626 notify.go:220] Checking for updates...
	I1008 17:33:40.696466  537626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:33:40.697696  537626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:33:40.698797  537626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:40.699881  537626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:33:40.700901  537626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:33:40.702093  537626 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:33:40.734458  537626 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 17:33:40.735521  537626 start.go:297] selected driver: kvm2
	I1008 17:33:40.735533  537626 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:33:40.735546  537626 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:33:40.736284  537626 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:33:40.736383  537626 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:33:40.751491  537626 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:33:40.751537  537626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:33:40.751866  537626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:33:40.751908  537626 cni.go:84] Creating CNI manager for ""
	I1008 17:33:40.751965  537626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:33:40.751994  537626 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 17:33:40.752073  537626 start.go:340] cluster config:
	{Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:33:40.752182  537626 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:33:40.753671  537626 out.go:177] * Starting "addons-738106" primary control-plane node in "addons-738106" cluster
	I1008 17:33:40.754857  537626 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:33:40.754885  537626 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 17:33:40.754899  537626 cache.go:56] Caching tarball of preloaded images
	I1008 17:33:40.754982  537626 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:33:40.754993  537626 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:33:40.755281  537626 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/config.json ...
	I1008 17:33:40.755299  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/config.json: {Name:mk595d6258b4a439716133f21c17ed4f412fe4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:33:40.755417  537626 start.go:360] acquireMachinesLock for addons-738106: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:33:40.755456  537626 start.go:364] duration metric: took 27.241µs to acquireMachinesLock for "addons-738106"
	I1008 17:33:40.755471  537626 start.go:93] Provisioning new machine with config: &{Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:33:40.755514  537626 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 17:33:40.757184  537626 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1008 17:33:40.757315  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:33:40.757363  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:33:40.771499  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I1008 17:33:40.771962  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:33:40.772564  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:33:40.772584  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:33:40.772964  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:33:40.773146  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:33:40.773312  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:33:40.773438  537626 start.go:159] libmachine.API.Create for "addons-738106" (driver="kvm2")
	I1008 17:33:40.773461  537626 client.go:168] LocalClient.Create starting
	I1008 17:33:40.773491  537626 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:33:41.005943  537626 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:33:41.410209  537626 main.go:141] libmachine: Running pre-create checks...
	I1008 17:33:41.410243  537626 main.go:141] libmachine: (addons-738106) Calling .PreCreateCheck
	I1008 17:33:41.410770  537626 main.go:141] libmachine: (addons-738106) Calling .GetConfigRaw
	I1008 17:33:41.411220  537626 main.go:141] libmachine: Creating machine...
	I1008 17:33:41.411239  537626 main.go:141] libmachine: (addons-738106) Calling .Create
	I1008 17:33:41.411358  537626 main.go:141] libmachine: (addons-738106) Creating KVM machine...
	I1008 17:33:41.412593  537626 main.go:141] libmachine: (addons-738106) DBG | found existing default KVM network
	I1008 17:33:41.413370  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.413228  537648 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1008 17:33:41.413411  537626 main.go:141] libmachine: (addons-738106) DBG | created network xml: 
	I1008 17:33:41.413425  537626 main.go:141] libmachine: (addons-738106) DBG | <network>
	I1008 17:33:41.413438  537626 main.go:141] libmachine: (addons-738106) DBG |   <name>mk-addons-738106</name>
	I1008 17:33:41.413450  537626 main.go:141] libmachine: (addons-738106) DBG |   <dns enable='no'/>
	I1008 17:33:41.413462  537626 main.go:141] libmachine: (addons-738106) DBG |   
	I1008 17:33:41.413474  537626 main.go:141] libmachine: (addons-738106) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 17:33:41.413489  537626 main.go:141] libmachine: (addons-738106) DBG |     <dhcp>
	I1008 17:33:41.413509  537626 main.go:141] libmachine: (addons-738106) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 17:33:41.413521  537626 main.go:141] libmachine: (addons-738106) DBG |     </dhcp>
	I1008 17:33:41.413536  537626 main.go:141] libmachine: (addons-738106) DBG |   </ip>
	I1008 17:33:41.413601  537626 main.go:141] libmachine: (addons-738106) DBG |   
	I1008 17:33:41.413643  537626 main.go:141] libmachine: (addons-738106) DBG | </network>
	I1008 17:33:41.413708  537626 main.go:141] libmachine: (addons-738106) DBG | 
	I1008 17:33:41.418980  537626 main.go:141] libmachine: (addons-738106) DBG | trying to create private KVM network mk-addons-738106 192.168.39.0/24...
	I1008 17:33:41.482290  537626 main.go:141] libmachine: (addons-738106) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106 ...
	I1008 17:33:41.482354  537626 main.go:141] libmachine: (addons-738106) DBG | private KVM network mk-addons-738106 192.168.39.0/24 created
	I1008 17:33:41.482379  537626 main.go:141] libmachine: (addons-738106) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:33:41.482406  537626 main.go:141] libmachine: (addons-738106) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:33:41.482424  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.482195  537648 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:41.752140  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.752000  537648 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa...
	I1008 17:33:41.904597  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.904464  537648 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/addons-738106.rawdisk...
	I1008 17:33:41.904628  537626 main.go:141] libmachine: (addons-738106) DBG | Writing magic tar header
	I1008 17:33:41.904639  537626 main.go:141] libmachine: (addons-738106) DBG | Writing SSH key tar header
	I1008 17:33:41.904647  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.904590  537648 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106 ...
	I1008 17:33:41.904704  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106
	I1008 17:33:41.904749  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:33:41.904762  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106 (perms=drwx------)
	I1008 17:33:41.904768  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:41.904779  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:33:41.904785  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:33:41.904791  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:33:41.904802  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:33:41.904825  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:33:41.904836  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home
	I1008 17:33:41.904848  537626 main.go:141] libmachine: (addons-738106) DBG | Skipping /home - not owner
	I1008 17:33:41.904862  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:33:41.904871  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:33:41.904878  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:33:41.904886  537626 main.go:141] libmachine: (addons-738106) Creating domain...
	I1008 17:33:41.905829  537626 main.go:141] libmachine: (addons-738106) define libvirt domain using xml: 
	I1008 17:33:41.905865  537626 main.go:141] libmachine: (addons-738106) <domain type='kvm'>
	I1008 17:33:41.905876  537626 main.go:141] libmachine: (addons-738106)   <name>addons-738106</name>
	I1008 17:33:41.905888  537626 main.go:141] libmachine: (addons-738106)   <memory unit='MiB'>4000</memory>
	I1008 17:33:41.905900  537626 main.go:141] libmachine: (addons-738106)   <vcpu>2</vcpu>
	I1008 17:33:41.905909  537626 main.go:141] libmachine: (addons-738106)   <features>
	I1008 17:33:41.905920  537626 main.go:141] libmachine: (addons-738106)     <acpi/>
	I1008 17:33:41.905928  537626 main.go:141] libmachine: (addons-738106)     <apic/>
	I1008 17:33:41.905938  537626 main.go:141] libmachine: (addons-738106)     <pae/>
	I1008 17:33:41.905955  537626 main.go:141] libmachine: (addons-738106)     
	I1008 17:33:41.905965  537626 main.go:141] libmachine: (addons-738106)   </features>
	I1008 17:33:41.905977  537626 main.go:141] libmachine: (addons-738106)   <cpu mode='host-passthrough'>
	I1008 17:33:41.905992  537626 main.go:141] libmachine: (addons-738106)   
	I1008 17:33:41.906007  537626 main.go:141] libmachine: (addons-738106)   </cpu>
	I1008 17:33:41.906113  537626 main.go:141] libmachine: (addons-738106)   <os>
	I1008 17:33:41.906176  537626 main.go:141] libmachine: (addons-738106)     <type>hvm</type>
	I1008 17:33:41.906195  537626 main.go:141] libmachine: (addons-738106)     <boot dev='cdrom'/>
	I1008 17:33:41.906205  537626 main.go:141] libmachine: (addons-738106)     <boot dev='hd'/>
	I1008 17:33:41.906217  537626 main.go:141] libmachine: (addons-738106)     <bootmenu enable='no'/>
	I1008 17:33:41.906223  537626 main.go:141] libmachine: (addons-738106)   </os>
	I1008 17:33:41.906231  537626 main.go:141] libmachine: (addons-738106)   <devices>
	I1008 17:33:41.906242  537626 main.go:141] libmachine: (addons-738106)     <disk type='file' device='cdrom'>
	I1008 17:33:41.906260  537626 main.go:141] libmachine: (addons-738106)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/boot2docker.iso'/>
	I1008 17:33:41.906271  537626 main.go:141] libmachine: (addons-738106)       <target dev='hdc' bus='scsi'/>
	I1008 17:33:41.906282  537626 main.go:141] libmachine: (addons-738106)       <readonly/>
	I1008 17:33:41.906291  537626 main.go:141] libmachine: (addons-738106)     </disk>
	I1008 17:33:41.906303  537626 main.go:141] libmachine: (addons-738106)     <disk type='file' device='disk'>
	I1008 17:33:41.906314  537626 main.go:141] libmachine: (addons-738106)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:33:41.906364  537626 main.go:141] libmachine: (addons-738106)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/addons-738106.rawdisk'/>
	I1008 17:33:41.906384  537626 main.go:141] libmachine: (addons-738106)       <target dev='hda' bus='virtio'/>
	I1008 17:33:41.906393  537626 main.go:141] libmachine: (addons-738106)     </disk>
	I1008 17:33:41.906398  537626 main.go:141] libmachine: (addons-738106)     <interface type='network'>
	I1008 17:33:41.906407  537626 main.go:141] libmachine: (addons-738106)       <source network='mk-addons-738106'/>
	I1008 17:33:41.906412  537626 main.go:141] libmachine: (addons-738106)       <model type='virtio'/>
	I1008 17:33:41.906418  537626 main.go:141] libmachine: (addons-738106)     </interface>
	I1008 17:33:41.906422  537626 main.go:141] libmachine: (addons-738106)     <interface type='network'>
	I1008 17:33:41.906430  537626 main.go:141] libmachine: (addons-738106)       <source network='default'/>
	I1008 17:33:41.906437  537626 main.go:141] libmachine: (addons-738106)       <model type='virtio'/>
	I1008 17:33:41.906443  537626 main.go:141] libmachine: (addons-738106)     </interface>
	I1008 17:33:41.906460  537626 main.go:141] libmachine: (addons-738106)     <serial type='pty'>
	I1008 17:33:41.906468  537626 main.go:141] libmachine: (addons-738106)       <target port='0'/>
	I1008 17:33:41.906473  537626 main.go:141] libmachine: (addons-738106)     </serial>
	I1008 17:33:41.906483  537626 main.go:141] libmachine: (addons-738106)     <console type='pty'>
	I1008 17:33:41.906488  537626 main.go:141] libmachine: (addons-738106)       <target type='serial' port='0'/>
	I1008 17:33:41.906493  537626 main.go:141] libmachine: (addons-738106)     </console>
	I1008 17:33:41.906499  537626 main.go:141] libmachine: (addons-738106)     <rng model='virtio'>
	I1008 17:33:41.906505  537626 main.go:141] libmachine: (addons-738106)       <backend model='random'>/dev/random</backend>
	I1008 17:33:41.906511  537626 main.go:141] libmachine: (addons-738106)     </rng>
	I1008 17:33:41.906516  537626 main.go:141] libmachine: (addons-738106)     
	I1008 17:33:41.906520  537626 main.go:141] libmachine: (addons-738106)     
	I1008 17:33:41.906525  537626 main.go:141] libmachine: (addons-738106)   </devices>
	I1008 17:33:41.906530  537626 main.go:141] libmachine: (addons-738106) </domain>
	I1008 17:33:41.906538  537626 main.go:141] libmachine: (addons-738106) 
	I1008 17:33:41.912401  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:55:8d:d9 in network default
	I1008 17:33:41.912967  537626 main.go:141] libmachine: (addons-738106) Ensuring networks are active...
	I1008 17:33:41.912984  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:41.913739  537626 main.go:141] libmachine: (addons-738106) Ensuring network default is active
	I1008 17:33:41.914048  537626 main.go:141] libmachine: (addons-738106) Ensuring network mk-addons-738106 is active
	I1008 17:33:41.914535  537626 main.go:141] libmachine: (addons-738106) Getting domain xml...
	I1008 17:33:41.915123  537626 main.go:141] libmachine: (addons-738106) Creating domain...
	I1008 17:33:43.279998  537626 main.go:141] libmachine: (addons-738106) Waiting to get IP...
	I1008 17:33:43.280697  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:43.281638  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:43.281778  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:43.281662  537648 retry.go:31] will retry after 280.838427ms: waiting for machine to come up
	I1008 17:33:43.563864  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:43.564296  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:43.564318  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:43.564251  537648 retry.go:31] will retry after 296.09476ms: waiting for machine to come up
	I1008 17:33:43.861843  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:43.862339  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:43.862368  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:43.862270  537648 retry.go:31] will retry after 332.461301ms: waiting for machine to come up
	I1008 17:33:44.196957  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:44.197420  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:44.197448  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:44.197391  537648 retry.go:31] will retry after 526.383574ms: waiting for machine to come up
	I1008 17:33:44.725015  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:44.725401  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:44.725429  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:44.725345  537648 retry.go:31] will retry after 538.672431ms: waiting for machine to come up
	I1008 17:33:45.266158  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:45.266580  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:45.266610  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:45.266527  537648 retry.go:31] will retry after 900.712695ms: waiting for machine to come up
	I1008 17:33:46.169489  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:46.169891  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:46.169923  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:46.169834  537648 retry.go:31] will retry after 1.143660308s: waiting for machine to come up
	I1008 17:33:47.315050  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:47.315428  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:47.315460  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:47.315375  537648 retry.go:31] will retry after 1.073047933s: waiting for machine to come up
	I1008 17:33:48.390588  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:48.390944  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:48.390988  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:48.390915  537648 retry.go:31] will retry after 1.696404496s: waiting for machine to come up
	I1008 17:33:50.089745  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:50.090140  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:50.090162  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:50.090101  537648 retry.go:31] will retry after 1.509226141s: waiting for machine to come up
	I1008 17:33:51.600783  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:51.601284  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:51.601315  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:51.601244  537648 retry.go:31] will retry after 1.977893914s: waiting for machine to come up
	I1008 17:33:53.581353  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:53.581850  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:53.581879  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:53.581798  537648 retry.go:31] will retry after 2.977291089s: waiting for machine to come up
	I1008 17:33:56.560180  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:56.560606  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:56.560635  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:56.560558  537648 retry.go:31] will retry after 3.871394004s: waiting for machine to come up
	I1008 17:34:00.433827  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:00.434188  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:34:00.434229  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:34:00.434156  537648 retry.go:31] will retry after 4.107122672s: waiting for machine to come up
	I1008 17:34:04.545293  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.545823  537626 main.go:141] libmachine: (addons-738106) Found IP for machine: 192.168.39.48
	I1008 17:34:04.545842  537626 main.go:141] libmachine: (addons-738106) Reserving static IP address...
	I1008 17:34:04.545854  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has current primary IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.546187  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find host DHCP lease matching {name: "addons-738106", mac: "52:54:00:4c:47:63", ip: "192.168.39.48"} in network mk-addons-738106
	I1008 17:34:04.614821  537626 main.go:141] libmachine: (addons-738106) DBG | Getting to WaitForSSH function...
	I1008 17:34:04.614857  537626 main.go:141] libmachine: (addons-738106) Reserved static IP address: 192.168.39.48
	I1008 17:34:04.614870  537626 main.go:141] libmachine: (addons-738106) Waiting for SSH to be available...
	I1008 17:34:04.617262  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.617651  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.617688  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.617832  537626 main.go:141] libmachine: (addons-738106) DBG | Using SSH client type: external
	I1008 17:34:04.617857  537626 main.go:141] libmachine: (addons-738106) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa (-rw-------)
	I1008 17:34:04.617889  537626 main.go:141] libmachine: (addons-738106) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:34:04.617908  537626 main.go:141] libmachine: (addons-738106) DBG | About to run SSH command:
	I1008 17:34:04.617943  537626 main.go:141] libmachine: (addons-738106) DBG | exit 0
	I1008 17:34:04.746027  537626 main.go:141] libmachine: (addons-738106) DBG | SSH cmd err, output: <nil>: 
	I1008 17:34:04.746254  537626 main.go:141] libmachine: (addons-738106) KVM machine creation complete!
	I1008 17:34:04.746651  537626 main.go:141] libmachine: (addons-738106) Calling .GetConfigRaw
	I1008 17:34:04.747217  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:04.747408  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:04.747593  537626 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:34:04.747611  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:04.748868  537626 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:34:04.748883  537626 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:34:04.748891  537626 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:34:04.748899  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:04.750925  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.751259  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.751291  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.751400  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:04.751603  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.751761  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.751899  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:04.752053  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:04.752290  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:04.752304  537626 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:34:04.853192  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:34:04.853219  537626 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:34:04.853227  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:04.855866  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.856174  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.856214  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.856387  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:04.856566  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.856732  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.856912  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:04.857062  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:04.857277  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:04.857293  537626 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:34:04.958748  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:34:04.958839  537626 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:34:04.958854  537626 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:34:04.958869  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:34:04.959108  537626 buildroot.go:166] provisioning hostname "addons-738106"
	I1008 17:34:04.959134  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:34:04.959328  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:04.961843  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.962210  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.962244  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.962401  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:04.962557  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.962687  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.962791  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:04.962903  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:04.963117  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:04.963135  537626 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-738106 && echo "addons-738106" | sudo tee /etc/hostname
	I1008 17:34:05.075384  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-738106
	
	I1008 17:34:05.075419  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.077767  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.078103  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.078131  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.078311  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.078501  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.078663  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.078743  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.078877  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:05.079079  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:05.079096  537626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-738106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-738106/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-738106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:34:05.186157  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:34:05.186192  537626 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:34:05.186225  537626 buildroot.go:174] setting up certificates
	I1008 17:34:05.186240  537626 provision.go:84] configureAuth start
	I1008 17:34:05.186255  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:34:05.186545  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:05.189184  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.189567  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.189606  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.189693  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.191890  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.192196  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.192221  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.192357  537626 provision.go:143] copyHostCerts
	I1008 17:34:05.192436  537626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:34:05.192558  537626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:34:05.192617  537626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:34:05.192695  537626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.addons-738106 san=[127.0.0.1 192.168.39.48 addons-738106 localhost minikube]
	I1008 17:34:05.349238  537626 provision.go:177] copyRemoteCerts
	I1008 17:34:05.349305  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:34:05.349332  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.352101  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.352407  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.352435  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.352609  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.352768  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.352908  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.353013  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.432190  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:34:05.454658  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:34:05.476764  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:34:05.498637  537626 provision.go:87] duration metric: took 312.381796ms to configureAuth
	I1008 17:34:05.498661  537626 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:34:05.498828  537626 config.go:182] Loaded profile config "addons-738106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:34:05.498928  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.501510  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.501847  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.501879  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.502016  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.502201  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.502352  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.502489  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.502695  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:05.502859  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:05.502874  537626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:34:05.711457  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:34:05.711491  537626 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:34:05.711500  537626 main.go:141] libmachine: (addons-738106) Calling .GetURL
	I1008 17:34:05.712784  537626 main.go:141] libmachine: (addons-738106) DBG | Using libvirt version 6000000
	I1008 17:34:05.715240  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.715550  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.715575  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.715711  537626 main.go:141] libmachine: Docker is up and running!
	I1008 17:34:05.715722  537626 main.go:141] libmachine: Reticulating splines...
	I1008 17:34:05.715731  537626 client.go:171] duration metric: took 24.942259489s to LocalClient.Create
	I1008 17:34:05.715755  537626 start.go:167] duration metric: took 24.942316943s to libmachine.API.Create "addons-738106"
	I1008 17:34:05.715768  537626 start.go:293] postStartSetup for "addons-738106" (driver="kvm2")
	I1008 17:34:05.715782  537626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:34:05.715802  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.716060  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:34:05.716097  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.718151  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.718501  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.718530  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.718687  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.718861  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.719030  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.719174  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.800698  537626 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:34:05.804645  537626 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:34:05.804672  537626 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:34:05.804747  537626 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:34:05.804769  537626 start.go:296] duration metric: took 88.995336ms for postStartSetup
	I1008 17:34:05.804817  537626 main.go:141] libmachine: (addons-738106) Calling .GetConfigRaw
	I1008 17:34:05.805432  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:05.807893  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.808299  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.808326  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.808574  537626 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/config.json ...
	I1008 17:34:05.808776  537626 start.go:128] duration metric: took 25.053251682s to createHost
	I1008 17:34:05.808805  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.811112  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.811413  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.811439  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.811627  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.811791  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.811960  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.812118  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.812258  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:05.812429  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:05.812439  537626 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:34:05.910631  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728408845.886453669
	
	I1008 17:34:05.910659  537626 fix.go:216] guest clock: 1728408845.886453669
	I1008 17:34:05.910669  537626 fix.go:229] Guest: 2024-10-08 17:34:05.886453669 +0000 UTC Remote: 2024-10-08 17:34:05.80879367 +0000 UTC m=+25.157788476 (delta=77.659999ms)
	I1008 17:34:05.910691  537626 fix.go:200] guest clock delta is within tolerance: 77.659999ms
	I1008 17:34:05.910697  537626 start.go:83] releasing machines lock for "addons-738106", held for 25.155232261s
	I1008 17:34:05.910725  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.911029  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:05.913440  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.913748  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.913774  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.913968  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.914426  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.914581  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.914689  537626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:34:05.914737  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.914775  537626 ssh_runner.go:195] Run: cat /version.json
	I1008 17:34:05.914803  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.917231  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917497  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917612  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.917644  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917884  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.917910  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917936  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.918065  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.918118  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.918268  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.918285  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.918421  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.918436  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.918570  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.991240  537626 ssh_runner.go:195] Run: systemctl --version
	I1008 17:34:06.014066  537626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:34:06.170003  537626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:34:06.176190  537626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:34:06.176269  537626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:34:06.192224  537626 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:34:06.192243  537626 start.go:495] detecting cgroup driver to use...
	I1008 17:34:06.192307  537626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:34:06.208351  537626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:34:06.221631  537626 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:34:06.221735  537626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:34:06.234985  537626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:34:06.247848  537626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:34:06.361058  537626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:34:06.510411  537626 docker.go:233] disabling docker service ...
	I1008 17:34:06.510505  537626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:34:06.523563  537626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:34:06.536132  537626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:34:06.651508  537626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:34:06.764205  537626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:34:06.777440  537626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:34:06.795381  537626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:34:06.795459  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.805419  537626 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:34:06.805488  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.815187  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.824538  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.833890  537626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:34:06.843452  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.852855  537626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.868678  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.878074  537626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:34:06.886541  537626 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:34:06.886583  537626 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:34:06.898471  537626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:34:06.906877  537626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:34:07.020732  537626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:34:07.114574  537626 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:34:07.114655  537626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:34:07.119392  537626 start.go:563] Will wait 60s for crictl version
	I1008 17:34:07.119450  537626 ssh_runner.go:195] Run: which crictl
	I1008 17:34:07.123099  537626 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:34:07.168996  537626 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:34:07.169095  537626 ssh_runner.go:195] Run: crio --version
	I1008 17:34:07.200572  537626 ssh_runner.go:195] Run: crio --version
	I1008 17:34:07.228827  537626 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:34:07.230289  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:07.232823  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:07.233181  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:07.233212  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:07.233381  537626 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:34:07.237443  537626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:34:07.249431  537626 kubeadm.go:883] updating cluster {Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 17:34:07.249554  537626 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:34:07.249617  537626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:34:07.279917  537626 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 17:34:07.279996  537626 ssh_runner.go:195] Run: which lz4
	I1008 17:34:07.283943  537626 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 17:34:07.287802  537626 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 17:34:07.287824  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 17:34:08.497093  537626 crio.go:462] duration metric: took 1.213200062s to copy over tarball
	I1008 17:34:08.497163  537626 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 17:34:10.559838  537626 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.062646653s)
	I1008 17:34:10.559874  537626 crio.go:469] duration metric: took 2.062749764s to extract the tarball
	I1008 17:34:10.559885  537626 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 17:34:10.596900  537626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:34:10.636232  537626 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 17:34:10.636259  537626 cache_images.go:84] Images are preloaded, skipping loading
	I1008 17:34:10.636298  537626 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.1 crio true true} ...
	I1008 17:34:10.636438  537626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-738106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:34:10.636529  537626 ssh_runner.go:195] Run: crio config
	I1008 17:34:10.680707  537626 cni.go:84] Creating CNI manager for ""
	I1008 17:34:10.680732  537626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:34:10.680757  537626 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 17:34:10.680791  537626 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-738106 NodeName:addons-738106 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 17:34:10.680942  537626 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-738106"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 17:34:10.681020  537626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:34:10.690845  537626 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 17:34:10.690917  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 17:34:10.700048  537626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 17:34:10.716022  537626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:34:10.731674  537626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1008 17:34:10.747105  537626 ssh_runner.go:195] Run: grep 192.168.39.48	control-plane.minikube.internal$ /etc/hosts
	I1008 17:34:10.750695  537626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:34:10.762251  537626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:34:10.873308  537626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:34:10.890510  537626 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106 for IP: 192.168.39.48
	I1008 17:34:10.890544  537626 certs.go:194] generating shared ca certs ...
	I1008 17:34:10.890579  537626 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:10.890758  537626 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:34:10.976005  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt ...
	I1008 17:34:10.976040  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt: {Name:mk2e03f13a61c15f4a04d301f8782221fad00d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:10.976213  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key ...
	I1008 17:34:10.976224  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key: {Name:mk3e6571165dc2f41e24b21c47ec4b378152c3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:10.976294  537626 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:34:11.070506  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt ...
	I1008 17:34:11.070539  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt: {Name:mkbdb588abd4e5f892ee88285210baf17ac68d59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.070694  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key ...
	I1008 17:34:11.070707  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key: {Name:mk2b9c3a9084dcbe12cc25abe16ba6ffe6e02f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.070777  537626 certs.go:256] generating profile certs ...
	I1008 17:34:11.070834  537626 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.key
	I1008 17:34:11.070857  537626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt with IP's: []
	I1008 17:34:11.127410  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt ...
	I1008 17:34:11.127442  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: {Name:mk059e29262c9e19b9ef00ba4b05c9a99e65ddfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.127592  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.key ...
	I1008 17:34:11.127602  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.key: {Name:mk3262ac206d5297ba8efeeb5c541edbb0aa34f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.127668  537626 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5
	I1008 17:34:11.127686  537626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48]
	I1008 17:34:11.409390  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5 ...
	I1008 17:34:11.409429  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5: {Name:mk5d3287da65c1ac0657d6c2bda0130ed40c5006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.409605  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5 ...
	I1008 17:34:11.409618  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5: {Name:mk39fdd4deaa631d7548b40f45b39a8aec584738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.409699  537626 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt
	I1008 17:34:11.409789  537626 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key
	I1008 17:34:11.409835  537626 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key
	I1008 17:34:11.409854  537626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt with IP's: []
	I1008 17:34:11.473382  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt ...
	I1008 17:34:11.473413  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt: {Name:mke96fb5cd120bb380ed9b3bc0b2f6a63aba040f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.473571  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key ...
	I1008 17:34:11.473585  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key: {Name:mkf937bb5f1d51c1f200451b4b42e7fde440243a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.473747  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:34:11.473781  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:34:11.473808  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:34:11.473830  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:34:11.474443  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:34:11.498473  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:34:11.520378  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:34:11.542036  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:34:11.572645  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 17:34:11.607203  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 17:34:11.629739  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:34:11.651481  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:34:11.673055  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:34:11.694095  537626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 17:34:11.709388  537626 ssh_runner.go:195] Run: openssl version
	I1008 17:34:11.714710  537626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:34:11.724661  537626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:34:11.728719  537626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:34:11.728778  537626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:34:11.734186  537626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:34:11.743910  537626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:34:11.747766  537626 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:34:11.747820  537626 kubeadm.go:392] StartCluster: {Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:34:11.747896  537626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 17:34:11.747958  537626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 17:34:11.781773  537626 cri.go:89] found id: ""
	I1008 17:34:11.781859  537626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 17:34:11.791551  537626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 17:34:11.801184  537626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 17:34:11.810432  537626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 17:34:11.810456  537626 kubeadm.go:157] found existing configuration files:
	
	I1008 17:34:11.810506  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 17:34:11.819190  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 17:34:11.819271  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 17:34:11.828414  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 17:34:11.837327  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 17:34:11.837396  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 17:34:11.846202  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 17:34:11.854625  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 17:34:11.854668  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 17:34:11.863149  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 17:34:11.871421  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 17:34:11.871469  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 17:34:11.880164  537626 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 17:34:11.929272  537626 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 17:34:11.929470  537626 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 17:34:12.031679  537626 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 17:34:12.031811  537626 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 17:34:12.031952  537626 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 17:34:12.043199  537626 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 17:34:12.045397  537626 out.go:235]   - Generating certificates and keys ...
	I1008 17:34:12.045520  537626 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 17:34:12.045632  537626 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 17:34:12.089991  537626 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 17:34:12.400933  537626 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 17:34:12.447240  537626 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 17:34:12.575099  537626 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 17:34:12.770280  537626 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 17:34:12.770473  537626 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-738106 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1008 17:34:12.871630  537626 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 17:34:12.871919  537626 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-738106 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1008 17:34:12.966016  537626 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 17:34:13.568473  537626 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 17:34:13.679771  537626 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 17:34:13.680026  537626 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 17:34:13.875389  537626 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 17:34:13.996093  537626 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 17:34:14.196895  537626 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 17:34:14.370849  537626 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 17:34:14.486072  537626 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 17:34:14.486751  537626 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 17:34:14.489256  537626 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 17:34:14.490956  537626 out.go:235]   - Booting up control plane ...
	I1008 17:34:14.491039  537626 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 17:34:14.491109  537626 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 17:34:14.491627  537626 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 17:34:14.507235  537626 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 17:34:14.513825  537626 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 17:34:14.513894  537626 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 17:34:14.648900  537626 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 17:34:14.649067  537626 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 17:34:15.150335  537626 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.811507ms
	I1008 17:34:15.150438  537626 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 17:34:20.648700  537626 kubeadm.go:310] [api-check] The API server is healthy after 5.501451413s
	I1008 17:34:20.668633  537626 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 17:34:20.678297  537626 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 17:34:20.703405  537626 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 17:34:20.703601  537626 kubeadm.go:310] [mark-control-plane] Marking the node addons-738106 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 17:34:20.713827  537626 kubeadm.go:310] [bootstrap-token] Using token: ijcjf0.l7d52rdo1tzhu6v1
	I1008 17:34:20.715143  537626 out.go:235]   - Configuring RBAC rules ...
	I1008 17:34:20.715273  537626 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 17:34:20.723107  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 17:34:20.730590  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 17:34:20.733668  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 17:34:20.736581  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 17:34:20.740820  537626 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 17:34:21.055724  537626 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 17:34:21.512687  537626 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 17:34:22.053335  537626 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 17:34:22.054193  537626 kubeadm.go:310] 
	I1008 17:34:22.054266  537626 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 17:34:22.054273  537626 kubeadm.go:310] 
	I1008 17:34:22.054371  537626 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 17:34:22.054399  537626 kubeadm.go:310] 
	I1008 17:34:22.054453  537626 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 17:34:22.054540  537626 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 17:34:22.054631  537626 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 17:34:22.054652  537626 kubeadm.go:310] 
	I1008 17:34:22.054730  537626 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 17:34:22.054766  537626 kubeadm.go:310] 
	I1008 17:34:22.054854  537626 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 17:34:22.054870  537626 kubeadm.go:310] 
	I1008 17:34:22.054949  537626 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 17:34:22.055091  537626 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 17:34:22.055213  537626 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 17:34:22.055231  537626 kubeadm.go:310] 
	I1008 17:34:22.055348  537626 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 17:34:22.055451  537626 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 17:34:22.055462  537626 kubeadm.go:310] 
	I1008 17:34:22.055583  537626 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ijcjf0.l7d52rdo1tzhu6v1 \
	I1008 17:34:22.055722  537626 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 17:34:22.055761  537626 kubeadm.go:310] 	--control-plane 
	I1008 17:34:22.055770  537626 kubeadm.go:310] 
	I1008 17:34:22.055900  537626 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 17:34:22.055919  537626 kubeadm.go:310] 
	I1008 17:34:22.056022  537626 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ijcjf0.l7d52rdo1tzhu6v1 \
	I1008 17:34:22.056163  537626 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 17:34:22.057373  537626 kubeadm.go:310] W1008 17:34:11.909094     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:34:22.057626  537626 kubeadm.go:310] W1008 17:34:11.909971     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:34:22.057741  537626 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 17:34:22.057782  537626 cni.go:84] Creating CNI manager for ""
	I1008 17:34:22.057796  537626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:34:22.059536  537626 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 17:34:22.060703  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 17:34:22.071101  537626 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
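	For context: the file written above, /etc/cni/net.d/1-k8s.conflist, is a CNI network list for the bridge plugin that minikube recommends for the kvm2 driver with the crio runtime. The exact 496-byte payload is not reproduced in this log; the sketch below is only an illustrative bridge conflist of the same general shape, and its field values (version, network name, pod subnet) are assumptions rather than the values used in this run.

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}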
	I1008 17:34:22.094582  537626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 17:34:22.094680  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:22.094692  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-738106 minikube.k8s.io/updated_at=2024_10_08T17_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=addons-738106 minikube.k8s.io/primary=true
	I1008 17:34:22.237739  537626 ops.go:34] apiserver oom_adj: -16
	I1008 17:34:22.237907  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:22.738839  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:23.238428  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:23.738842  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:24.238435  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:24.738905  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:25.238799  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:25.738870  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:26.238792  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:26.738587  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:26.849091  537626 kubeadm.go:1113] duration metric: took 4.754486007s to wait for elevateKubeSystemPrivileges
	I1008 17:34:26.849135  537626 kubeadm.go:394] duration metric: took 15.101320067s to StartCluster
	I1008 17:34:26.849161  537626 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:26.849312  537626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:34:26.849837  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:26.850093  537626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 17:34:26.850086  537626 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:34:26.850113  537626 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1008 17:34:26.850239  537626 addons.go:69] Setting yakd=true in profile "addons-738106"
	I1008 17:34:26.850278  537626 addons.go:69] Setting registry=true in profile "addons-738106"
	I1008 17:34:26.850289  537626 addons.go:69] Setting ingress=true in profile "addons-738106"
	I1008 17:34:26.850294  537626 addons.go:234] Setting addon yakd=true in "addons-738106"
	I1008 17:34:26.850303  537626 addons.go:234] Setting addon registry=true in "addons-738106"
	I1008 17:34:26.850306  537626 config.go:182] Loaded profile config "addons-738106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:34:26.850312  537626 addons.go:69] Setting ingress-dns=true in profile "addons-738106"
	I1008 17:34:26.850333  537626 addons.go:234] Setting addon ingress-dns=true in "addons-738106"
	I1008 17:34:26.850342  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850354  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850366  537626 addons.go:69] Setting volcano=true in profile "addons-738106"
	I1008 17:34:26.850377  537626 addons.go:234] Setting addon volcano=true in "addons-738106"
	I1008 17:34:26.850388  537626 addons.go:69] Setting inspektor-gadget=true in profile "addons-738106"
	I1008 17:34:26.850400  537626 addons.go:234] Setting addon inspektor-gadget=true in "addons-738106"
	I1008 17:34:26.850379  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850424  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850484  537626 addons.go:69] Setting volumesnapshots=true in profile "addons-738106"
	I1008 17:34:26.850508  537626 addons.go:234] Setting addon volumesnapshots=true in "addons-738106"
	I1008 17:34:26.850542  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850861  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.850252  537626 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-738106"
	I1008 17:34:26.850880  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.850885  537626 addons.go:69] Setting storage-provisioner=true in profile "addons-738106"
	I1008 17:34:26.850404  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850900  537626 addons.go:234] Setting addon storage-provisioner=true in "addons-738106"
	I1008 17:34:26.850907  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850915  537626 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-738106"
	I1008 17:34:26.850922  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.850929  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850922  537626 addons.go:69] Setting metrics-server=true in profile "addons-738106"
	I1008 17:34:26.850251  537626 addons.go:69] Setting cloud-spanner=true in profile "addons-738106"
	I1008 17:34:26.850951  537626 addons.go:234] Setting addon metrics-server=true in "addons-738106"
	I1008 17:34:26.850955  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850957  537626 addons.go:234] Setting addon cloud-spanner=true in "addons-738106"
	I1008 17:34:26.850937  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851014  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851138  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851248  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851267  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851293  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850272  537626 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-738106"
	I1008 17:34:26.851314  537626 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-738106"
	I1008 17:34:26.850869  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851337  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851299  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851346  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850281  537626 addons.go:69] Setting gcp-auth=true in profile "addons-738106"
	I1008 17:34:26.850240  537626 addons.go:69] Setting default-storageclass=true in profile "addons-738106"
	I1008 17:34:26.851422  537626 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-738106"
	I1008 17:34:26.851424  537626 mustload.go:65] Loading cluster: addons-738106
	I1008 17:34:26.850267  537626 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-738106"
	I1008 17:34:26.851441  537626 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-738106"
	I1008 17:34:26.851500  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851601  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851661  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851718  537626 config.go:182] Loaded profile config "addons-738106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:34:26.851312  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851770  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850304  537626 addons.go:234] Setting addon ingress=true in "addons-738106"
	I1008 17:34:26.851873  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851910  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851968  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851989  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852039  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852047  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852062  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852082  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852095  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852118  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852125  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852131  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.852154  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852285  537626 out.go:177] * Verifying Kubernetes components...
	I1008 17:34:26.853752  537626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:34:26.871278  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I1008 17:34:26.871325  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 17:34:26.871547  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I1008 17:34:26.871550  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I1008 17:34:26.871998  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872063  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872193  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872562  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.872583  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.872691  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872916  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.872966  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.872997  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.873148  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.873161  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.873597  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.873626  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.873783  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.873842  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I1008 17:34:26.886884  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.886932  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.888123  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.888164  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.888199  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.888360  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.888382  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.888473  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I1008 17:34:26.889034  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.889048  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.889075  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.889117  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.889199  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.889599  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.889618  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.890023  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.890050  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.890541  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.890567  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.890655  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.891173  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.891215  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.891420  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.898767  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.898813  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.918340  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I1008 17:34:26.919105  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.919994  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.920019  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.920462  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.920521  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1008 17:34:26.920851  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.921075  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.921662  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.921680  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.922081  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.922413  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.924131  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I1008 17:34:26.924372  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.924690  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.925509  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.925530  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.926007  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.926471  537626 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1008 17:34:26.926645  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.926710  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I1008 17:34:26.927861  537626 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1008 17:34:26.927886  537626 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1008 17:34:26.927916  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.928808  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I1008 17:34:26.928817  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I1008 17:34:26.928842  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1008 17:34:26.928862  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.928808  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.929261  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.929308  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.929342  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.929761  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.929788  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.929805  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.929855  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.929869  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.930195  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.930258  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.930270  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.930282  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.930683  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.930716  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.930787  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.930804  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.931394  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.931406  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.931464  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.931854  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.931896  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.932093  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1008 17:34:26.932791  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.932829  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.932947  537626 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1008 17:34:26.932988  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.933832  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.933869  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.934087  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.934112  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.934128  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.934247  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.934259  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.934330  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.934380  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I1008 17:34:26.934527  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.934680  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.934742  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.934812  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.935117  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.935255  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.935287  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.935529  537626 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 17:34:26.935547  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1008 17:34:26.935566  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.935662  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.935910  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.937528  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.940087  537626 addons.go:234] Setting addon default-storageclass=true in "addons-738106"
	I1008 17:34:26.940136  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.940491  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.940523  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.941770  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1008 17:34:26.941919  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I1008 17:34:26.942071  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.942607  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.942553  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.942750  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.942945  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1008 17:34:26.942962  537626 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1008 17:34:26.942982  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.943615  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.943640  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.943653  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.943806  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.943913  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.943952  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.943987  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.944217  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.944751  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.945023  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.946305  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.946724  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.946751  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.946999  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.947340  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.947488  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.947657  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.949912  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I1008 17:34:26.952200  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I1008 17:34:26.952847  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.953389  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.953408  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.954468  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.954737  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.956357  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.957235  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I1008 17:34:26.957412  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38913
	I1008 17:34:26.957933  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.958411  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.958416  537626 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 17:34:26.958605  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.958630  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.958895  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.958915  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.959129  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.959192  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I1008 17:34:26.959501  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.959858  537626 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:34:26.959878  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 17:34:26.959896  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.960343  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.960909  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.960929  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.961427  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.961763  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.962383  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1008 17:34:26.962879  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.963331  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.963348  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.963703  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.963889  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.964095  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.965434  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I1008 17:34:26.965459  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:26.965493  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:26.965437  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.965849  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.965942  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:26.965947  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:26.965957  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:26.965965  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:26.965971  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:26.965973  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.965988  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.966139  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.966221  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:26.966250  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:26.966257  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	W1008 17:34:26.966381  537626 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1008 17:34:26.966709  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.966746  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.967026  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.967029  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.967064  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.968363  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.968374  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.968420  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.968565  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.968760  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.968835  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.969949  537626 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1008 17:34:26.971014  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 17:34:26.971035  537626 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 17:34:26.971056  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.971139  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.971199  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
	I1008 17:34:26.971700  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.972244  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.972261  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.972619  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1008 17:34:26.972766  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.973056  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.973075  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I1008 17:34:26.973669  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.974189  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.974206  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.974715  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.974858  537626 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-738106"
	I1008 17:34:26.974909  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.975078  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.975266  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.975311  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.975382  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1008 17:34:26.976013  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.976114  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.976622  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.976709  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.976892  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.976909  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.976936  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.977027  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.977085  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.977524  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I1008 17:34:26.977531  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.977554  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1008 17:34:26.978070  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.978243  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.978257  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.978980  537626 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1008 17:34:26.979021  537626 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1008 17:34:26.979036  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.979052  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.979082  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.979449  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.979499  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.979533  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.980017  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1008 17:34:26.980025  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.980057  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.980162  537626 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 17:34:26.980175  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1008 17:34:26.980193  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.980533  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1008 17:34:26.980554  537626 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1008 17:34:26.980567  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.982185  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1008 17:34:26.983244  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1008 17:34:26.983826  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.983847  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I1008 17:34:26.984222  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.984532  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.984555  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.984568  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.984638  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.984654  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.984741  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.984922  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.985095  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.985132  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.985151  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.985097  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.985200  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1008 17:34:26.985306  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.985599  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.985606  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.985775  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.985951  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.986051  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.987294  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1008 17:34:26.987404  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.988257  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1008 17:34:26.988276  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1008 17:34:26.988294  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.989042  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 17:34:26.990243  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1008 17:34:26.991190  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.991631  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.991667  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.991829  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.992001  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.992125  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.992231  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.992521  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 17:34:26.993913  537626 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 17:34:26.993937  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1008 17:34:26.993954  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.997179  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.997681  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.997701  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.997890  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.998059  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.998182  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.998335  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.000017  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I1008 17:34:27.000544  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.001144  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.001163  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.001442  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.001662  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.002966  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39145
	I1008 17:34:27.003151  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I1008 17:34:27.003338  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.003424  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.003519  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.003888  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.003915  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.004338  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.004468  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.004505  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.004914  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:27.004962  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:27.004985  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.005163  537626 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1008 17:34:27.006443  537626 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1008 17:34:27.006461  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1008 17:34:27.006479  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.006529  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.006712  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I1008 17:34:27.007878  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.008747  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.008763  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.008789  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.009569  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.009660  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.009884  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.010013  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.010101  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.010183  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.010383  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.010386  537626 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1008 17:34:27.010507  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.010597  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.012224  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.012440  537626 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 17:34:27.012458  537626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 17:34:27.012477  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.012749  537626 out.go:177]   - Using image docker.io/registry:2.8.3
	I1008 17:34:27.014480  537626 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1008 17:34:27.014502  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1008 17:34:27.014516  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.016259  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.016823  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.016844  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.017001  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.017166  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.017256  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.017343  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.018260  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	W1008 17:34:27.018387  537626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51170->192.168.39.48:22: read: connection reset by peer
	I1008 17:34:27.018413  537626 retry.go:31] will retry after 153.104938ms: ssh: handshake failed: read tcp 192.168.39.1:51170->192.168.39.48:22: read: connection reset by peer
	I1008 17:34:27.018719  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.018732  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.018942  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.019084  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.019203  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.019488  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.024210  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35987
	I1008 17:34:27.024622  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.025188  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.025203  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.025538  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.025714  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.027141  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.028943  537626 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1008 17:34:27.030072  537626 out.go:177]   - Using image docker.io/busybox:stable
	I1008 17:34:27.031093  537626 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 17:34:27.031110  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1008 17:34:27.031124  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.033661  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.033959  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.033983  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.034108  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.034299  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.034452  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.034599  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.300930  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1008 17:34:27.300963  537626 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1008 17:34:27.331935  537626 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1008 17:34:27.331963  537626 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1008 17:34:27.366200  537626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:34:27.366201  537626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 17:34:27.402587  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:34:27.403408  537626 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1008 17:34:27.403429  537626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1008 17:34:27.404696  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 17:34:27.445572  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1008 17:34:27.445611  537626 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1008 17:34:27.449416  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1008 17:34:27.449448  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1008 17:34:27.497928  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 17:34:27.500719  537626 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1008 17:34:27.500749  537626 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1008 17:34:27.528545  537626 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1008 17:34:27.528583  537626 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1008 17:34:27.532304  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 17:34:27.532330  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1008 17:34:27.535505  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 17:34:27.553335  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 17:34:27.608094  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 17:34:27.634109  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1008 17:34:27.683517  537626 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1008 17:34:27.683544  537626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1008 17:34:27.708868  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1008 17:34:27.708896  537626 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1008 17:34:27.725020  537626 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1008 17:34:27.725046  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1008 17:34:27.727572  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 17:34:27.727592  537626 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 17:34:27.739662  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1008 17:34:27.739688  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1008 17:34:27.749414  537626 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1008 17:34:27.749437  537626 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1008 17:34:27.809410  537626 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1008 17:34:27.809446  537626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1008 17:34:27.897639  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 17:34:27.897681  537626 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 17:34:27.916094  537626 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1008 17:34:27.916131  537626 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1008 17:34:27.918602  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1008 17:34:27.918623  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1008 17:34:27.935476  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1008 17:34:27.935502  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1008 17:34:27.949759  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1008 17:34:27.949784  537626 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1008 17:34:27.958909  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1008 17:34:28.014400  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 17:34:28.068948  537626 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 17:34:28.068973  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1008 17:34:28.087378  537626 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1008 17:34:28.087413  537626 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1008 17:34:28.096141  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1008 17:34:28.100088  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1008 17:34:28.100113  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1008 17:34:28.213586  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1008 17:34:28.213620  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1008 17:34:28.236357  537626 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1008 17:34:28.236381  537626 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1008 17:34:28.273339  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 17:34:28.530307  537626 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1008 17:34:28.530454  537626 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1008 17:34:28.592041  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1008 17:34:28.592072  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1008 17:34:28.754771  537626 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1008 17:34:28.754797  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1008 17:34:28.872490  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1008 17:34:28.911349  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1008 17:34:28.911383  537626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1008 17:34:29.170648  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1008 17:34:29.170676  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1008 17:34:29.237263  537626 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.870968074s)
	I1008 17:34:29.237309  537626 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1008 17:34:29.237321  537626 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.871076931s)
	I1008 17:34:29.238125  537626 node_ready.go:35] waiting up to 6m0s for node "addons-738106" to be "Ready" ...
	I1008 17:34:29.244216  537626 node_ready.go:49] node "addons-738106" has status "Ready":"True"
	I1008 17:34:29.244245  537626 node_ready.go:38] duration metric: took 6.095882ms for node "addons-738106" to be "Ready" ...
	I1008 17:34:29.244256  537626 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:34:29.258109  537626 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:29.491930  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1008 17:34:29.491958  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1008 17:34:29.743530  537626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-738106" context rescaled to 1 replicas
	I1008 17:34:29.752026  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 17:34:29.752061  537626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1008 17:34:30.149529  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 17:34:31.176702  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.774067143s)
	I1008 17:34:31.176767  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:31.176789  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:31.177129  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:31.177147  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:31.177152  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:31.177178  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:31.177189  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:31.177505  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:31.177529  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:31.177541  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:31.265910  537626 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:33.269016  537626 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:33.993850  537626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1008 17:34:33.993899  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:33.997516  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:33.997970  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:33.997998  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:33.998213  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:33.998471  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:33.998626  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:33.998767  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:34.475290  537626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1008 17:34:34.598232  537626 addons.go:234] Setting addon gcp-auth=true in "addons-738106"
	I1008 17:34:34.598301  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:34.598685  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:34.598755  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:34.614277  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1008 17:34:34.614729  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:34.615205  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:34.615230  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:34.615556  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:34.616191  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:34.616257  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:34.632271  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I1008 17:34:34.632779  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:34.633348  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:34.633381  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:34.633781  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:34.634014  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:34.635682  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:34.635919  537626 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1008 17:34:34.635946  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:34.638878  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:34.639253  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:34.639280  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:34.639468  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:34.639648  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:34.639844  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:34.640029  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:34.914430  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.509702088s)
	I1008 17:34:34.914490  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914502  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914505  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.416534256s)
	I1008 17:34:34.914554  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914559  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.379027448s)
	I1008 17:34:34.914573  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914598  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914615  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914604  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.361244559s)
	I1008 17:34:34.914634  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.306516511s)
	I1008 17:34:34.914674  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.280529257s)
	I1008 17:34:34.914683  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914690  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914693  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914707  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914708  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914741  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.955804509s)
	I1008 17:34:34.914747  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914756  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914765  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914868  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.900429137s)
	I1008 17:34:34.914891  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914901  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914993  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.818822946s)
	I1008 17:34:34.915010  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915023  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915160  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.641788358s)
	W1008 17:34:34.915192  537626 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 17:34:34.915222  537626 retry.go:31] will retry after 287.200789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 17:34:34.915317  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.042794512s)
	I1008 17:34:34.915344  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915354  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915392  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.915404  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.915413  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915421  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915475  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.915508  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.915516  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.915525  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915531  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915669  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.915698  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.915704  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.915711  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915718  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916083  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916111  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916118  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916124  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916130  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916379  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916415  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916429  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916451  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916456  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916462  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916468  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916511  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916517  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916523  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916528  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916566  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916572  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916578  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916584  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916666  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916713  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916719  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916728  537626 addons.go:475] Verifying addon registry=true in "addons-738106"
	I1008 17:34:34.918587  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.918614  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.918620  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.918628  537626 addons.go:475] Verifying addon ingress=true in "addons-738106"
	I1008 17:34:34.918750  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.918758  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.918766  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.918772  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.918816  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.918841  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.918847  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.918853  537626 addons.go:475] Verifying addon metrics-server=true in "addons-738106"
	I1008 17:34:34.919080  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919119  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.919126  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919502  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919529  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921226  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919584  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919609  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919618  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921291  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919622  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919638  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921329  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921343  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.921354  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.919641  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921379  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919661  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919676  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919692  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921472  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921480  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.921487  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.919711  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921528  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921626  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.921659  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921666  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921071  537626 out.go:177] * Verifying ingress addon...
	I1008 17:34:34.921850  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921868  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921090  537626 out.go:177] * Verifying registry addon...
	I1008 17:34:34.923202  537626 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-738106 service yakd-dashboard -n yakd-dashboard
	
	I1008 17:34:34.924168  537626 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1008 17:34:34.924168  537626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1008 17:34:34.948329  537626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 17:34:34.948356  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:34.948494  537626 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1008 17:34:34.948508  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
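	The kapi.go lines above, and the long runs of "waiting for pod ... current state: Pending" that follow, come from polling pods by label selector until they leave Pending. A minimal client-go sketch of that pattern is below; the helper name and clientset setup are assumptions for illustration, not minikube's actual kapi.go code.

	// Hypothetical sketch of a label-selector pod wait, assuming a configured
	// client-go clientset; not minikube's actual kapi.go implementation.
	package verify

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until all are Running
	// or the timeout expires, logging the current phase while it waits.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						allRunning = false
						break
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods with selector %q did not become Running within %v", selector, timeout)
	}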
	I1008 17:34:34.979980  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.980004  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.980326  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.980341  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	W1008 17:34:34.980450  537626 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1008 17:34:34.982846  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.982862  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.983132  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.983148  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:35.202679  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 17:34:35.461358  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:35.462051  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:35.598228  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.448639113s)
	I1008 17:34:35.598291  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:35.598307  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:35.598624  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:35.598645  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:35.598655  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:35.598664  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:35.599019  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:35.599054  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:35.599072  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:35.599091  537626 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-738106"
	I1008 17:34:35.599696  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 17:34:35.600395  537626 out.go:177] * Verifying csi-hostpath-driver addon...
	I1008 17:34:35.601634  537626 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1008 17:34:35.602270  537626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1008 17:34:35.602718  537626 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1008 17:34:35.602737  537626 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1008 17:34:35.627382  537626 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 17:34:35.627406  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:35.717071  537626 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1008 17:34:35.717105  537626 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1008 17:34:35.771634  537626 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:35.828934  537626 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 17:34:35.828963  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1008 17:34:35.866051  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 17:34:35.929217  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:35.929730  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:36.109938  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:36.428529  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:36.428869  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:36.607438  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:36.947918  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:36.948171  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:37.109527  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:37.383182  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.180422305s)
	I1008 17:34:37.383210  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.517121439s)
	I1008 17:34:37.383243  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.383260  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.383260  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.383276  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.383603  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:37.383613  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.383627  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.383637  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.383644  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.383881  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.383898  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.385502  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.385555  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.385578  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.385597  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.385823  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.385842  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.385876  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:37.387125  537626 addons.go:475] Verifying addon gcp-auth=true in "addons-738106"
	I1008 17:34:37.388872  537626 out.go:177] * Verifying gcp-auth addon...
	I1008 17:34:37.390870  537626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1008 17:34:37.408047  537626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 17:34:37.408066  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:37.441365  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:37.442049  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:37.607422  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:37.769557  537626 pod_ready.go:93] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:37.769588  537626 pod_ready.go:82] duration metric: took 8.511452172s for pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:37.769600  537626 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:37.897291  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:37.928732  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:37.929204  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:38.111633  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:38.395143  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:38.428627  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:38.429183  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:38.607682  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:38.896034  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:38.928630  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:38.928898  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:39.111114  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:39.296621  537626 pod_ready.go:98] pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.48 HostIPs:[{IP:192.168.39.48}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-08 17:34:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-08 17:34:32 +0000 UTC,FinishedAt:2024-10-08 17:34:37 +0000 UTC,ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e Started:0xc00294b440 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029058e0} {Name:kube-api-access-2mxkw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029058f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1008 17:34:39.296656  537626 pod_ready.go:82] duration metric: took 1.527048083s for pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace to be "Ready" ...
	E1008 17:34:39.296672  537626 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.48 HostIPs:[{IP:192.168.39.48}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-08 17:34:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-08 17:34:32 +0000 UTC,FinishedAt:2024-10-08 17:34:37 +0000 UTC,ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e Started:0xc00294b440 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029058e0} {Name:kube-api-access-2mxkw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029058f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1008 17:34:39.296692  537626 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.331936  537626 pod_ready.go:93] pod "etcd-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.331962  537626 pod_ready.go:82] duration metric: took 35.25898ms for pod "etcd-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.331983  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.338964  537626 pod_ready.go:93] pod "kube-apiserver-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.338986  537626 pod_ready.go:82] duration metric: took 6.993302ms for pod "kube-apiserver-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.338997  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.346652  537626 pod_ready.go:93] pod "kube-controller-manager-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.346672  537626 pod_ready.go:82] duration metric: took 7.66745ms for pod "kube-controller-manager-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.346684  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7clnt" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.361811  537626 pod_ready.go:93] pod "kube-proxy-7clnt" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.361834  537626 pod_ready.go:82] duration metric: took 15.142018ms for pod "kube-proxy-7clnt" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.361844  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.411880  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:39.433069  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:39.434810  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:39.607399  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:39.763599  537626 pod_ready.go:93] pod "kube-scheduler-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.763631  537626 pod_ready.go:82] duration metric: took 401.7777ms for pod "kube-scheduler-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.763646  537626 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace to be "Ready" ...
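	The pod_ready.go lines wait on a named pod's Ready condition rather than its phase. A minimal sketch of that check is below; the helper name and clientset are assumptions for illustration, not minikube's pod_ready.go.

	// Hypothetical sketch, assuming a configured client-go clientset.
	package verify

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady reports whether the named pod's Ready condition is True; callers
	// poll it until true or give up, as the pod_ready.go lines in this log do.
	func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}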
	I1008 17:34:39.894381  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:39.934736  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:39.935241  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:40.108178  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:40.402357  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:40.429131  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:40.431577  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:40.607247  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:40.895778  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:40.928419  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:40.930147  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:41.116312  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:41.542501  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:41.542744  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:41.544763  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:41.606612  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:41.769394  537626 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:41.895309  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:41.928518  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:41.929279  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:42.107156  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:42.394536  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:42.429931  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:42.430674  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:42.609645  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:42.894946  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:42.928811  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:42.929115  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:43.106403  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:43.395784  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:43.429122  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:43.430345  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:43.608653  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:43.770557  537626 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:43.894914  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:43.931082  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:43.931841  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:44.107117  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:44.394760  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:44.427980  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:44.428696  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:44.607591  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:44.895791  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:44.928407  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:44.928654  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:45.106683  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:45.396219  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:45.429182  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:45.429897  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:45.608666  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:45.770001  537626 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:45.770026  537626 pod_ready.go:82] duration metric: took 6.006371678s for pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:45.770035  537626 pod_ready.go:39] duration metric: took 16.525763483s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:34:45.770051  537626 api_server.go:52] waiting for apiserver process to appear ...
	I1008 17:34:45.770103  537626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 17:34:45.787325  537626 api_server.go:72] duration metric: took 18.937121492s to wait for apiserver process to appear ...
	I1008 17:34:45.787354  537626 api_server.go:88] waiting for apiserver healthz status ...
	I1008 17:34:45.787377  537626 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1008 17:34:45.792397  537626 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1008 17:34:45.793228  537626 api_server.go:141] control plane version: v1.31.1
	I1008 17:34:45.793249  537626 api_server.go:131] duration metric: took 5.888645ms to wait for apiserver health ...
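	The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 ok. A minimal sketch of such a poll follows; the TLS setup and helper name are illustrative assumptions, not minikube's api_server.go.

	// Hypothetical healthz poll; real callers would trust the cluster CA
	// instead of skipping verification.
	package verify

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz URL until it returns HTTP 200.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // e.g. https://192.168.39.48:8443/healthz returned 200: ok
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s did not become healthy within %v", url, timeout)
	}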
	I1008 17:34:45.793257  537626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 17:34:45.800410  537626 system_pods.go:59] 17 kube-system pods found
	I1008 17:34:45.800435  537626 system_pods.go:61] "coredns-7c65d6cfc9-4zs69" [a555f46c-9cef-4b78-a31f-6ad3cd88c338] Running
	I1008 17:34:45.800443  537626 system_pods.go:61] "csi-hostpath-attacher-0" [db6e092c-da8c-46ea-8e60-b2c9a91b4497] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 17:34:45.800451  537626 system_pods.go:61] "csi-hostpath-resizer-0" [70c956d8-0d97-477b-a407-7e74b8d53685] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 17:34:45.800460  537626 system_pods.go:61] "csi-hostpathplugin-r4djc" [64366d61-0edb-46a5-8813-2d30575552a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 17:34:45.800468  537626 system_pods.go:61] "etcd-addons-738106" [72430698-e927-4d04-8392-0bfc6eb98c60] Running
	I1008 17:34:45.800473  537626 system_pods.go:61] "kube-apiserver-addons-738106" [3af39427-8de7-4cf3-93c5-783349179428] Running
	I1008 17:34:45.800477  537626 system_pods.go:61] "kube-controller-manager-addons-738106" [660c3a28-4781-4e08-a328-9d59d85d6245] Running
	I1008 17:34:45.800482  537626 system_pods.go:61] "kube-ingress-dns-minikube" [2ed789e2-91c6-459c-8366-72e74bc03132] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 17:34:45.800488  537626 system_pods.go:61] "kube-proxy-7clnt" [e9720997-cb8e-4870-8f6b-9b3bc1a30218] Running
	I1008 17:34:45.800492  537626 system_pods.go:61] "kube-scheduler-addons-738106" [45b2c7a7-8c10-4894-bad7-5af6f70a4b83] Running
	I1008 17:34:45.800497  537626 system_pods.go:61] "metrics-server-84c5f94fbc-w72vc" [01f00ce3-494b-4d47-ab30-2439d417f6b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 17:34:45.800500  537626 system_pods.go:61] "nvidia-device-plugin-daemonset-dz2k9" [42202b26-4c49-44bb-836f-cfcd7b7a3a5f] Running
	I1008 17:34:45.800506  537626 system_pods.go:61] "registry-66c9cd494c-wsg7d" [1e47d1a8-5e9a-4214-9302-306efa48abeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 17:34:45.800511  537626 system_pods.go:61] "registry-proxy-6hj56" [0c50d7bc-8a1f-4eb6-a83a-d29fda2e2722] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 17:34:45.800518  537626 system_pods.go:61] "snapshot-controller-56fcc65765-4rtbq" [fe86a2d5-d3af-4ca8-8c16-4a43b4d10a1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.800526  537626 system_pods.go:61] "snapshot-controller-56fcc65765-6bdsg" [e24c8dfd-265c-4e3a-82c3-41ce76e322f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.800554  537626 system_pods.go:61] "storage-provisioner" [1b01ab9a-1013-49d5-9c61-88a751457598] Running
	I1008 17:34:45.800563  537626 system_pods.go:74] duration metric: took 7.299999ms to wait for pod list to return data ...
	I1008 17:34:45.800569  537626 default_sa.go:34] waiting for default service account to be created ...
	I1008 17:34:45.802607  537626 default_sa.go:45] found service account: "default"
	I1008 17:34:45.802622  537626 default_sa.go:55] duration metric: took 2.048023ms for default service account to be created ...
	I1008 17:34:45.802628  537626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 17:34:45.811153  537626 system_pods.go:86] 17 kube-system pods found
	I1008 17:34:45.811175  537626 system_pods.go:89] "coredns-7c65d6cfc9-4zs69" [a555f46c-9cef-4b78-a31f-6ad3cd88c338] Running
	I1008 17:34:45.811182  537626 system_pods.go:89] "csi-hostpath-attacher-0" [db6e092c-da8c-46ea-8e60-b2c9a91b4497] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 17:34:45.811190  537626 system_pods.go:89] "csi-hostpath-resizer-0" [70c956d8-0d97-477b-a407-7e74b8d53685] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 17:34:45.811197  537626 system_pods.go:89] "csi-hostpathplugin-r4djc" [64366d61-0edb-46a5-8813-2d30575552a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 17:34:45.811202  537626 system_pods.go:89] "etcd-addons-738106" [72430698-e927-4d04-8392-0bfc6eb98c60] Running
	I1008 17:34:45.811206  537626 system_pods.go:89] "kube-apiserver-addons-738106" [3af39427-8de7-4cf3-93c5-783349179428] Running
	I1008 17:34:45.811210  537626 system_pods.go:89] "kube-controller-manager-addons-738106" [660c3a28-4781-4e08-a328-9d59d85d6245] Running
	I1008 17:34:45.811215  537626 system_pods.go:89] "kube-ingress-dns-minikube" [2ed789e2-91c6-459c-8366-72e74bc03132] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 17:34:45.811219  537626 system_pods.go:89] "kube-proxy-7clnt" [e9720997-cb8e-4870-8f6b-9b3bc1a30218] Running
	I1008 17:34:45.811222  537626 system_pods.go:89] "kube-scheduler-addons-738106" [45b2c7a7-8c10-4894-bad7-5af6f70a4b83] Running
	I1008 17:34:45.811226  537626 system_pods.go:89] "metrics-server-84c5f94fbc-w72vc" [01f00ce3-494b-4d47-ab30-2439d417f6b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 17:34:45.811231  537626 system_pods.go:89] "nvidia-device-plugin-daemonset-dz2k9" [42202b26-4c49-44bb-836f-cfcd7b7a3a5f] Running
	I1008 17:34:45.811236  537626 system_pods.go:89] "registry-66c9cd494c-wsg7d" [1e47d1a8-5e9a-4214-9302-306efa48abeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 17:34:45.811241  537626 system_pods.go:89] "registry-proxy-6hj56" [0c50d7bc-8a1f-4eb6-a83a-d29fda2e2722] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 17:34:45.811246  537626 system_pods.go:89] "snapshot-controller-56fcc65765-4rtbq" [fe86a2d5-d3af-4ca8-8c16-4a43b4d10a1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.811255  537626 system_pods.go:89] "snapshot-controller-56fcc65765-6bdsg" [e24c8dfd-265c-4e3a-82c3-41ce76e322f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.811259  537626 system_pods.go:89] "storage-provisioner" [1b01ab9a-1013-49d5-9c61-88a751457598] Running
	I1008 17:34:45.811265  537626 system_pods.go:126] duration metric: took 8.632263ms to wait for k8s-apps to be running ...
	I1008 17:34:45.811272  537626 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 17:34:45.811316  537626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:34:45.833679  537626 system_svc.go:56] duration metric: took 22.401969ms WaitForService to wait for kubelet
	I1008 17:34:45.833703  537626 kubeadm.go:582] duration metric: took 18.983505627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:34:45.833721  537626 node_conditions.go:102] verifying NodePressure condition ...
	I1008 17:34:45.836687  537626 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:34:45.836708  537626 node_conditions.go:123] node cpu capacity is 2
	I1008 17:34:45.836720  537626 node_conditions.go:105] duration metric: took 2.982947ms to run NodePressure ...
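	The NodePressure verification above reads each node's ephemeral-storage and CPU capacity from the Node status. A minimal client-go sketch of reading those fields follows; the helper name is hypothetical, not minikube's node_conditions.go.

	// Hypothetical sketch, assuming a configured client-go clientset.
	package verify

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists each node's ephemeral-storage and CPU capacity,
	// mirroring the "node storage ephemeral capacity" / "node cpu capacity" lines.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}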
	I1008 17:34:45.836731  537626 start.go:241] waiting for startup goroutines ...
	I1008 17:34:45.893899  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:45.928576  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:45.928717  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:46.107866  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:46.396044  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:46.430721  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:46.431131  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:46.607503  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:46.893786  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:46.928861  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:46.929340  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:47.107289  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:47.395458  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:47.428751  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:47.431521  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:47.606676  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:47.894081  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:47.929040  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:47.929305  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:48.107200  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:48.395201  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:48.429769  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:48.430241  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:48.607015  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:48.895132  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:48.932593  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:48.932897  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:49.107570  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:49.423087  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:49.429895  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:49.430388  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:49.607072  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:49.894699  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:49.928425  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:49.928826  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:50.107664  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:50.396263  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:50.428395  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:50.429759  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:50.608434  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:50.894630  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:50.928162  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:50.928452  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:51.107102  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:51.395923  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:51.432031  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:51.432067  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:51.607269  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:51.894993  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:51.929168  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:51.930627  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:52.110183  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:52.397037  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:52.429571  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:52.430013  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:52.607980  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:52.896411  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:52.930334  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:52.930485  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:53.107450  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:53.396230  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:53.429182  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:53.429851  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:53.607034  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:53.895219  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:53.928832  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:53.929099  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:54.106713  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:54.396916  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:54.428764  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:54.429122  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:54.606480  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:54.895116  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:54.928193  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:54.929752  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:55.107590  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:55.395356  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:55.435392  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:55.435865  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:55.609495  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:55.895374  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:55.929546  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:55.929841  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:56.109026  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:56.396668  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:56.429858  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:56.429872  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:56.606777  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:56.894379  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:56.929854  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:56.931344  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:57.108899  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:57.396943  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:57.429196  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:57.429843  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:57.611082  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:57.895285  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:57.929224  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:57.930728  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:58.106897  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:58.398715  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:58.429119  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:58.429559  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:58.610121  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:58.894876  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:58.928814  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:58.928942  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:59.106812  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:59.394294  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:59.428751  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:59.429797  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:59.607191  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:59.895251  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:59.928023  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:59.929669  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:00.107133  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:00.415367  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:00.432023  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:00.438181  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:00.609351  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:00.895030  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:00.931923  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:00.932188  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:01.112212  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:01.394435  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:01.431143  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:01.442512  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:01.606829  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:01.894718  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:01.928597  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:01.929923  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:02.107880  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:02.394030  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:02.430499  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:02.430775  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:02.607808  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:03.226700  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:03.226887  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:03.227352  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:03.227612  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:03.393893  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:03.428791  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:03.429121  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:03.607113  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:03.895505  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:03.928261  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:03.928285  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:04.106908  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:04.394750  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:04.429620  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:04.429835  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:04.609009  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:04.894617  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:04.928745  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:04.929291  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:05.107697  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:05.524427  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:05.524768  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:05.525001  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:05.606802  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:05.894780  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:05.928589  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:05.928995  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:06.108743  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:06.394038  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:06.428384  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:06.429511  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:06.606886  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:06.894738  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:06.930164  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:06.930506  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:07.107379  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:07.394530  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:07.428331  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:07.429641  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:07.607726  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:07.895254  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:07.929396  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:07.929955  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:08.106521  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:08.394147  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:08.428836  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:08.429184  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:08.607296  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:08.895839  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:08.998831  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:09.000108  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:09.107674  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:09.394525  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:09.429351  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:09.430242  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:09.607364  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:09.896784  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:09.930013  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:09.931125  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:10.107233  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:10.396576  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:10.428298  537626 kapi.go:107] duration metric: took 35.504126288s to wait for kubernetes.io/minikube-addons=registry ...
	I1008 17:35:10.430374  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:10.606765  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:10.894611  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:10.927885  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:11.107475  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:11.394708  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:11.428795  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:11.607132  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:11.895535  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:12.310809  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:12.314395  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:12.406237  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:12.428465  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:12.607277  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:12.894975  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:12.929553  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:13.108571  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:13.406410  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:13.430344  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:13.607876  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:13.895998  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:13.934904  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:14.106201  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:14.395667  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:14.429201  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:14.608022  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:14.894989  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:14.928941  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:15.110470  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:15.396682  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:15.429396  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:15.607015  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:15.894520  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:15.929194  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:16.106711  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:16.404729  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:16.428152  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:16.607122  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:16.895279  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:16.996650  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:17.106549  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:17.395611  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:17.427975  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:17.606909  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:17.894372  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:17.929056  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:18.106425  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:18.398309  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:18.471451  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:18.607243  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:18.894766  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:18.928301  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:19.107679  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:19.394480  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:19.429130  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:19.607736  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:19.894139  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:19.929298  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:20.106962  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:20.402356  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:20.428990  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:20.606490  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:20.894966  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:20.928733  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:21.107085  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:21.395208  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:21.429853  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:21.607240  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:21.894394  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:21.929341  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:22.110156  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:22.405596  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:22.504403  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:22.607787  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:22.894552  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:22.928307  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:23.106393  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:23.394661  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:23.428310  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:23.606631  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:23.895251  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:23.930279  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:24.106838  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:24.394778  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:24.428843  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:24.607105  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:24.895278  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:24.929200  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:25.107136  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:25.402566  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:25.435660  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:25.608483  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:25.894491  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:25.928921  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:26.107934  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:26.394774  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:26.428272  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:26.607080  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:26.894487  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:26.929792  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:27.107393  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:27.396657  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:27.427969  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:27.607120  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:27.894645  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:27.928219  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:28.107548  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:28.396950  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:28.427742  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:28.608002  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:28.894097  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:28.929147  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:29.107348  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:29.394005  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:29.428200  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:29.607134  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:29.895065  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:29.928192  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:30.107287  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:30.398431  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:30.429883  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:30.606733  537626 kapi.go:107] duration metric: took 55.004459155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1008 17:35:30.895274  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:30.929047  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:31.394719  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:31.427922  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:31.894711  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:31.928327  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:32.394854  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:32.428132  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:32.896008  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:32.928293  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:33.394716  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:33.429206  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:33.893999  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:33.928975  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:34.396538  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:34.427728  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:34.896333  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:34.929069  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:35.395514  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:35.429284  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:35.895223  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:35.929205  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:36.397319  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:36.428544  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:36.894587  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:36.928004  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:37.395517  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:37.433811  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:37.895047  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:37.929982  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:38.396635  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:38.428244  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:38.894670  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:38.928115  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:39.394212  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:39.429293  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:39.894046  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:39.928470  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:40.395482  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:40.496836  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:40.902607  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:40.930108  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:41.394507  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:41.428638  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:41.894645  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:41.928493  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:42.403178  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:42.496852  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:42.896654  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:42.928626  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:43.394350  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:43.429485  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:43.894736  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:43.930490  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:44.394664  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:44.428155  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:44.894611  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:44.928143  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:45.459821  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:45.460202  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:45.895045  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:45.928552  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:46.397064  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:46.428483  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:46.894912  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:46.928330  537626 kapi.go:107] duration metric: took 1m12.004158927s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1008 17:35:47.394200  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:47.894693  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:48.394945  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:48.895361  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:49.394897  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:49.899133  537626 kapi.go:107] duration metric: took 1m12.508257183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1008 17:35:49.900781  537626 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-738106 cluster.
	I1008 17:35:49.902155  537626 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1008 17:35:49.903459  537626 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1008 17:35:49.904727  537626 out.go:177] * Enabled addons: storage-provisioner, metrics-server, cloud-spanner, nvidia-device-plugin, inspektor-gadget, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1008 17:35:49.906519  537626 addons.go:510] duration metric: took 1m23.056407589s for enable addons: enabled=[storage-provisioner metrics-server cloud-spanner nvidia-device-plugin inspektor-gadget ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1008 17:35:49.906567  537626 start.go:246] waiting for cluster config update ...
	I1008 17:35:49.906588  537626 start.go:255] writing updated cluster config ...
	I1008 17:35:49.907193  537626 ssh_runner.go:195] Run: rm -f paused
	I1008 17:35:49.960396  537626 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 17:35:49.961635  537626 out.go:177] * Done! kubectl is now configured to use "addons-738106" cluster and "default" namespace by default
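
For reference, opting a pod out of the credential mount works by adding the `gcp-auth-skip-secret` label named in the output above. The sketch below shows a minimal pod spec carrying that label; only the label key comes from the minikube message, while the pod name, image, and the "true" label value are illustrative assumptions.

    # Hypothetical pod spec: the gcp-auth webhook is expected to skip pods
    # carrying this label. Name, image, and the "true" value are assumptions;
    # the label key is the one named in the minikube output above.
    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
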
	
	
	==> CRI-O <==
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.200121521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1345bb84-9d65-49f5-9a2a-28acc70f7d6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.200173698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1345bb84-9d65-49f5-9a2a-28acc70f7d6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.200529429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_CREATED,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 54229a0d-9b3f-4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c1855ee299a9e2fec8636112cba1faa2727cddd3af2f8a94261c265934ab35,PodSandboxId:7994e0818e0b22c2ab1e8667818caf03a56b9dbc1768d2e90c90db371ec9dfc6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728408946123492066,Labels:map[string]string{io.kubernetes.container.name: c
ontroller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-m5nkv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e1394fa-bf21-42c4-bfb9-9cea5e8b7807,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:54d2ee50f6b585ec81258274339d746d02684af5e2a46f5c97f627c81c61ccc7,PodSandboxId:9f2cd0dad4f7c9472cbc23cfbf817ea96c1d91d08b9bcdb79b3a691272822ae5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&Imag
eSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916840042587,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q8l6x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 221f877b-4912-4a1c-93a2-b3ce8e903373,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4884b8931b0e6e1cf93af55355fc76ad5c0bfb2ce71e37f0fc0caf5f3230690,PodSandboxId:984457987e1703c7795b51f92f35925c73cdec1cd94b2bcc2f8506c7a9fb8fc8,Metadata:&ContainerMetadata
{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916717444192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zvmmq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9f554dd-66a2-42b8-a667-ea4babb2810a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d
3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1d
e,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728408899950619994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48613ad87931f39cdb015e9d6152011b53821ec5bed99303bdb6fe8ae858d8d3,PodSandboxId:d9c12cc81d24e61fd9aed0defa772b25012c47410b7b65c69d30415dfd987d06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728408891690382561,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed789e2-91c6-459c-8366-72e74bc03132,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac
8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},An
notations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1345bb84-9d65-49f5-9a2a-28acc70f7d6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.209192784Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=2128fd65-8219-4359-8b05-cbadb27e25ac name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.254825022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9e6eade-646f-4fc3-866c-54c248722c47 name=/runtime.v1.RuntimeService/Version
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.254893981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9e6eade-646f-4fc3-866c-54c248722c47 name=/runtime.v1.RuntimeService/Version
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.256186894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7c5a8c5-3d00-454d-8df3-7f25bb97f2a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.257319109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409606257296785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573880,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7c5a8c5-3d00-454d-8df3-7f25bb97f2a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.257840750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90039070-875a-40ef-ada0-9fb2c46feef9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.257895702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90039070-875a-40ef-ada0-9fb2c46feef9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.258322841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 54229a0d-9b3f-4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c1855ee299a9e2fec8636112cba1faa2727cddd3af2f8a94261c265934ab35,PodSandboxId:7994e0818e0b22c2ab1e8667818caf03a56b9dbc1768d2e90c90db371ec9dfc6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728408946123492066,Labels:map[string]string{io.kubernetes.container.name: c
ontroller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-m5nkv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e1394fa-bf21-42c4-bfb9-9cea5e8b7807,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:54d2ee50f6b585ec81258274339d746d02684af5e2a46f5c97f627c81c61ccc7,PodSandboxId:9f2cd0dad4f7c9472cbc23cfbf817ea96c1d91d08b9bcdb79b3a691272822ae5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&Imag
eSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916840042587,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q8l6x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 221f877b-4912-4a1c-93a2-b3ce8e903373,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4884b8931b0e6e1cf93af55355fc76ad5c0bfb2ce71e37f0fc0caf5f3230690,PodSandboxId:984457987e1703c7795b51f92f35925c73cdec1cd94b2bcc2f8506c7a9fb8fc8,Metadata:&ContainerMetadata
{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916717444192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zvmmq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9f554dd-66a2-42b8-a667-ea4babb2810a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d
3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1d
e,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728408899950619994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48613ad87931f39cdb015e9d6152011b53821ec5bed99303bdb6fe8ae858d8d3,PodSandboxId:d9c12cc81d24e61fd9aed0defa772b25012c47410b7b65c69d30415dfd987d06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728408891690382561,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed789e2-91c6-459c-8366-72e74bc03132,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac
8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},An
notations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90039070-875a-40ef-ada0-9fb2c46feef9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.290282038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4906f086-feb6-49cf-81d7-55d5c061dc4b name=/runtime.v1.RuntimeService/Version
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.290344335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4906f086-feb6-49cf-81d7-55d5c061dc4b name=/runtime.v1.RuntimeService/Version
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.291558163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0d9d95f-9859-460c-a16c-9d6425ff8e78 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.293044901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409606293020772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573880,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0d9d95f-9859-460c-a16c-9d6425ff8e78 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.293635570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c91e3fd-378b-42e3-acde-7e1e6df3a93b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.293712632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c91e3fd-378b-42e3-acde-7e1e6df3a93b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.294149169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 54229a0d-9b3f-4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c1855ee299a9e2fec8636112cba1faa2727cddd3af2f8a94261c265934ab35,PodSandboxId:7994e0818e0b22c2ab1e8667818caf03a56b9dbc1768d2e90c90db371ec9dfc6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728408946123492066,Labels:map[string]string{io.kubernetes.container.name: c
ontroller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-m5nkv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e1394fa-bf21-42c4-bfb9-9cea5e8b7807,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:54d2ee50f6b585ec81258274339d746d02684af5e2a46f5c97f627c81c61ccc7,PodSandboxId:9f2cd0dad4f7c9472cbc23cfbf817ea96c1d91d08b9bcdb79b3a691272822ae5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&Imag
eSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916840042587,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q8l6x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 221f877b-4912-4a1c-93a2-b3ce8e903373,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4884b8931b0e6e1cf93af55355fc76ad5c0bfb2ce71e37f0fc0caf5f3230690,PodSandboxId:984457987e1703c7795b51f92f35925c73cdec1cd94b2bcc2f8506c7a9fb8fc8,Metadata:&ContainerMetadata
{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916717444192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zvmmq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9f554dd-66a2-42b8-a667-ea4babb2810a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d
3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1d
e,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728408899950619994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48613ad87931f39cdb015e9d6152011b53821ec5bed99303bdb6fe8ae858d8d3,PodSandboxId:d9c12cc81d24e61fd9aed0defa772b25012c47410b7b65c69d30415dfd987d06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728408891690382561,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed789e2-91c6-459c-8366-72e74bc03132,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac
8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},An
notations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c91e3fd-378b-42e3-acde-7e1e6df3a93b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.329252002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05c5b7d8-a881-4335-8b3f-a6eed08793d7 name=/runtime.v1.RuntimeService/Version
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.329338703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05c5b7d8-a881-4335-8b3f-a6eed08793d7 name=/runtime.v1.RuntimeService/Version
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.330993693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0ec4716-05e5-4c16-85e2-1c485f6b6dbf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.332154374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409606332127613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573880,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0ec4716-05e5-4c16-85e2-1c485f6b6dbf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.332736714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=494d11bd-c1bd-48df-83b6-222c6e2cae0c name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.332814209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=494d11bd-c1bd-48df-83b6-222c6e2cae0c name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:46:46 addons-738106 crio[663]: time="2024-10-08 17:46:46.333254327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 54229a0d-9b3f-4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c1855ee299a9e2fec8636112cba1faa2727cddd3af2f8a94261c265934ab35,PodSandboxId:7994e0818e0b22c2ab1e8667818caf03a56b9dbc1768d2e90c90db371ec9dfc6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728408946123492066,Labels:map[string]string{io.kubernetes.container.name: c
ontroller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-m5nkv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e1394fa-bf21-42c4-bfb9-9cea5e8b7807,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:54d2ee50f6b585ec81258274339d746d02684af5e2a46f5c97f627c81c61ccc7,PodSandboxId:9f2cd0dad4f7c9472cbc23cfbf817ea96c1d91d08b9bcdb79b3a691272822ae5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&Imag
eSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916840042587,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-q8l6x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 221f877b-4912-4a1c-93a2-b3ce8e903373,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4884b8931b0e6e1cf93af55355fc76ad5c0bfb2ce71e37f0fc0caf5f3230690,PodSandboxId:984457987e1703c7795b51f92f35925c73cdec1cd94b2bcc2f8506c7a9fb8fc8,Metadata:&ContainerMetadata
{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728408916717444192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zvmmq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9f554dd-66a2-42b8-a667-ea4babb2810a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d
3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1d
e,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728408899950619994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48613ad87931f39cdb015e9d6152011b53821ec5bed99303bdb6fe8ae858d8d3,PodSandboxId:d9c12cc81d24e61fd9aed0defa772b25012c47410b7b65c69d30415dfd987d06,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728408891690382561,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed789e2-91c6-459c-8366-72e74bc03132,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac
8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},An
notations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=494d11bd-c1bd-48df-83b6-222c6e2cae0c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	4a6eda8e92df2       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   8033a1381ebd3       hello-world-app-55bf9c44b4-hkkxb
	7a1ca0c06c839       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   3f6b2acb99d73       nginx
	08160c6eb321a       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                        2 minutes ago            Running             headlamp                  0                   de4daf79167d6       headlamp-7b5c95b59d-tn9fh
	47c1855ee299a       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             11 minutes ago           Running             controller                0                   7994e0818e0b2       ingress-nginx-controller-bc57996ff-m5nkv
	54d2ee50f6b58       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago           Exited              patch                     0                   9f2cd0dad4f7c       ingress-nginx-admission-patch-q8l6x
	e4884b8931b0e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago           Exited              create                    0                   984457987e170       ingress-nginx-admission-create-zvmmq
	1a5c0b2131351       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago           Running             local-path-provisioner    0                   9d8268bb8e46d       local-path-provisioner-86d989889c-xzzz5
	60ddaa32bde36       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago           Running             metrics-server            0                   9d92fd6a40c0a       metrics-server-84c5f94fbc-w72vc
	48613ad87931f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             11 minutes ago           Running             minikube-ingress-dns      0                   d9c12cc81d24e       kube-ingress-dns-minikube
	6a4f440e54303       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago           Running             storage-provisioner       0                   bedc5f99e2d23       storage-provisioner
	b662f6217a7f0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago           Running             coredns                   0                   38da5754ac9d2       coredns-7c65d6cfc9-4zs69
	2a19a2f8241f5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago           Running             kube-proxy                0                   d6e44ed8aed04       kube-proxy-7clnt
	b83af138a30ef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago           Running             kube-scheduler            0                   e7f5dc9fe6f85       kube-scheduler-addons-738106
	7798fd88ce5cc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago           Running             etcd                      0                   0be3e9d8cb0a9       etcd-addons-738106
	c5040fb76a212       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago           Running             kube-controller-manager   0                   6fd03453e7f22       kube-controller-manager-addons-738106
	1f2191692905b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago           Running             kube-apiserver            0                   9d01b0820052a       kube-apiserver-addons-738106
	
	
	==> coredns [b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4] <==
	[INFO] 10.244.0.7:50550 - 25999 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000388398s
	[INFO] 10.244.0.7:50550 - 5784 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000088328s
	[INFO] 10.244.0.7:50550 - 7395 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000082407s
	[INFO] 10.244.0.7:50550 - 20011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097105s
	[INFO] 10.244.0.7:50550 - 44864 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000066033s
	[INFO] 10.244.0.7:50550 - 107 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000085911s
	[INFO] 10.244.0.7:50550 - 23491 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00006549s
	[INFO] 10.244.0.7:58716 - 31573 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088938s
	[INFO] 10.244.0.7:58716 - 31286 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101368s
	[INFO] 10.244.0.7:40706 - 44628 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091198s
	[INFO] 10.244.0.7:40706 - 44440 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000098547s
	[INFO] 10.244.0.7:35076 - 6785 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012626s
	[INFO] 10.244.0.7:35076 - 6581 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083679s
	[INFO] 10.244.0.7:36319 - 43125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000081748s
	[INFO] 10.244.0.7:36319 - 42932 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145831s
	[INFO] 10.244.0.21:44980 - 51630 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000477107s
	[INFO] 10.244.0.21:50082 - 61907 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164328s
	[INFO] 10.244.0.21:58799 - 52901 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105705s
	[INFO] 10.244.0.21:39487 - 53132 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080963s
	[INFO] 10.244.0.21:43497 - 1710 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089261s
	[INFO] 10.244.0.21:39794 - 283 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163054s
	[INFO] 10.244.0.21:41700 - 22206 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.004201125s
	[INFO] 10.244.0.21:36589 - 62420 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004644377s
	[INFO] 10.244.0.27:42280 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000534716s
	[INFO] 10.244.0.27:49171 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106904s
	
	
	==> describe nodes <==
	Name:               addons-738106
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-738106
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=addons-738106
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T17_34_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-738106
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:34:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-738106
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 17:46:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 17:44:56 +0000   Tue, 08 Oct 2024 17:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 17:44:56 +0000   Tue, 08 Oct 2024 17:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 17:44:56 +0000   Tue, 08 Oct 2024 17:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 17:44:56 +0000   Tue, 08 Oct 2024 17:34:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    addons-738106
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac2bf36f4d4c47babf58620ac692990b
	  System UUID:                ac2bf36f-4d4c-47ba-bf58-620ac692990b
	  Boot ID:                    e599bb5c-42a3-493c-a0fc-c38f314042f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-hkkxb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  headlamp                    headlamp-7b5c95b59d-tn9fh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-m5nkv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-4zs69                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-738106                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-738106                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-738106       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7clnt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-738106                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-w72vc             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xzzz5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-738106 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-738106 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-738106 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-738106 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-738106 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-738106 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-738106 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-738106 event: Registered Node addons-738106 in Controller
	
	
	==> dmesg <==
	[  +0.057200] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.486893] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.096290] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.239141] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.511294] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +4.556940] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.002396] kauditd_printk_skb: 133 callbacks suppressed
	[  +8.152319] kauditd_printk_skb: 75 callbacks suppressed
	[Oct 8 17:35] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.453453] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.291150] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.329215] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.160490] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.451243] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.864803] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.968769] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 8 17:44] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.002286] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.400019] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.061617] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.708490] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.168079] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.009426] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 8 17:45] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 8 17:46] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f] <==
	{"level":"info","ts":"2024-10-08T17:44:17.955300Z","caller":"traceutil/trace.go:171","msg":"trace[263743894] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2105; }","duration":"327.892869ms","start":"2024-10-08T17:44:17.627397Z","end":"2024-10-08T17:44:17.955289Z","steps":["trace[263743894] 'agreement among raft nodes before linearized reading'  (duration: 327.772552ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T17:44:17.955330Z","caller":"traceutil/trace.go:171","msg":"trace[1691474734] transaction","detail":"{read_only:false; response_revision:2105; number_of_response:1; }","duration":"374.919564ms","start":"2024-10-08T17:44:17.580389Z","end":"2024-10-08T17:44:17.955308Z","steps":["trace[1691474734] 'process raft request'  (duration: 374.563477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:17.955330Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T17:44:17.627344Z","time spent":"327.974243ms","remote":"127.0.0.1:39056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-08T17:44:17.955439Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T17:44:17.580374Z","time spent":"374.984396ms","remote":"127.0.0.1:39056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3496,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-5b584cc74-5ftt2\" mod_revision:2104 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-5b584cc74-5ftt2\" value_size:3427 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-5b584cc74-5ftt2\" > >"}
	{"level":"warn","ts":"2024-10-08T17:44:29.403052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.619052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-10-08T17:44:29.403781Z","caller":"traceutil/trace.go:171","msg":"trace[561843564] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2186; }","duration":"232.356493ms","start":"2024-10-08T17:44:29.171411Z","end":"2024-10-08T17:44:29.403768Z","steps":["trace[561843564] 'range keys from in-memory index tree'  (duration: 231.502291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.245315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:29.404510Z","caller":"traceutil/trace.go:171","msg":"trace[818453607] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2186; }","duration":"229.603167ms","start":"2024-10-08T17:44:29.174898Z","end":"2024-10-08T17:44:29.404501Z","steps":["trace[818453607] 'range keys from in-memory index tree'  (duration: 228.167673ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.147277ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:29.405298Z","caller":"traceutil/trace.go:171","msg":"trace[783401228] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2186; }","duration":"216.176661ms","start":"2024-10-08T17:44:29.189114Z","end":"2024-10-08T17:44:29.405291Z","steps":["trace[783401228] 'range keys from in-memory index tree'  (duration: 214.1407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.534198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/gadget/gadget-6c44ff6658\" ","response":"range_response_count:1 size:7285"}
	{"level":"info","ts":"2024-10-08T17:44:29.405375Z","caller":"traceutil/trace.go:171","msg":"trace[1399438501] range","detail":"{range_begin:/registry/controllerrevisions/gadget/gadget-6c44ff6658; range_end:; response_count:1; response_revision:2186; }","duration":"295.330393ms","start":"2024-10-08T17:44:29.110039Z","end":"2024-10-08T17:44:29.405370Z","steps":["trace[1399438501] 'range keys from in-memory index tree'  (duration: 293.467775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403600Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.962179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/gadget.kinvolk.io/traces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:29.405505Z","caller":"traceutil/trace.go:171","msg":"trace[1743803437] range","detail":"{range_begin:/registry/gadget.kinvolk.io/traces; range_end:; response_count:0; response_revision:2186; }","duration":"296.862444ms","start":"2024-10-08T17:44:29.108634Z","end":"2024-10-08T17:44:29.405497Z","steps":["trace[1743803437] 'range keys from in-memory index tree'  (duration: 294.935645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.040112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:588"}
	{"level":"info","ts":"2024-10-08T17:44:29.407335Z","caller":"traceutil/trace.go:171","msg":"trace[231416343] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:2186; }","duration":"298.753338ms","start":"2024-10-08T17:44:29.108573Z","end":"2024-10-08T17:44:29.407327Z","steps":["trace[231416343] 'range keys from in-memory index tree'  (duration: 294.929758ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.649745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-pr252\" ","response":"range_response_count:1 size:8385"}
	{"level":"info","ts":"2024-10-08T17:44:29.407451Z","caller":"traceutil/trace.go:171","msg":"trace[251154913] range","detail":"{range_begin:/registry/pods/gadget/gadget-pr252; range_end:; response_count:1; response_revision:2186; }","duration":"297.44885ms","start":"2024-10-08T17:44:29.109996Z","end":"2024-10-08T17:44:29.407445Z","steps":["trace[251154913] 'range keys from in-memory index tree'  (duration: 293.555627ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T17:44:34.294435Z","caller":"traceutil/trace.go:171","msg":"trace[1526887688] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2215; }","duration":"250.74649ms","start":"2024-10-08T17:44:34.043673Z","end":"2024-10-08T17:44:34.294420Z","steps":["trace[1526887688] 'process raft request'  (duration: 250.591942ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T17:44:34.296129Z","caller":"traceutil/trace.go:171","msg":"trace[1914669631] linearizableReadLoop","detail":"{readStateIndex:2370; appliedIndex:2368; }","duration":"123.843833ms","start":"2024-10-08T17:44:34.172214Z","end":"2024-10-08T17:44:34.296058Z","steps":["trace[1914669631] 'read index received'  (duration: 122.118024ms)","trace[1914669631] 'applied index is now lower than readState.Index'  (duration: 1.725048ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-08T17:44:34.296291Z","caller":"traceutil/trace.go:171","msg":"trace[1661852763] transaction","detail":"{read_only:false; response_revision:2216; number_of_response:1; }","duration":"242.132738ms","start":"2024-10-08T17:44:34.054148Z","end":"2024-10-08T17:44:34.296281Z","steps":["trace[1661852763] 'process raft request'  (duration: 241.684283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:34.296413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.182715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:34.296430Z","caller":"traceutil/trace.go:171","msg":"trace[301378954] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2216; }","duration":"124.216018ms","start":"2024-10-08T17:44:34.172210Z","end":"2024-10-08T17:44:34.296426Z","steps":["trace[301378954] 'agreement among raft nodes before linearized reading'  (duration: 124.170006ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:34.296512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.667688ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:34.296525Z","caller":"traceutil/trace.go:171","msg":"trace[1234633695] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2216; }","duration":"106.682361ms","start":"2024-10-08T17:44:34.189839Z","end":"2024-10-08T17:44:34.296521Z","steps":["trace[1234633695] 'agreement among raft nodes before linearized reading'  (duration: 106.658092ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:46:46 up 13 min,  0 users,  load average: 0.30, 0.33, 0.29
	Linux addons-738106 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528] <==
	 > logger="UnhandledError"
	E1008 17:36:02.407113       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.19.223:443: connect: connection refused" logger="UnhandledError"
	E1008 17:36:02.410612       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.19.223:443: connect: connection refused" logger="UnhandledError"
	E1008 17:36:02.415894       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.19.223:443: connect: connection refused" logger="UnhandledError"
	I1008 17:36:02.480910       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1008 17:44:13.553406       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.137.147"}
	I1008 17:44:25.153781       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1008 17:44:25.323571       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.35.237"}
	I1008 17:44:29.087314       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1008 17:44:30.442496       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1008 17:44:42.239745       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1008 17:45:01.847768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.847832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.879808       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.879904       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.900361       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.900462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.902378       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.902669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.923826       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.923882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1008 17:45:02.901589       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1008 17:45:02.926529       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1008 17:45:03.049327       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1008 17:46:44.997149       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.122.115"}
	
	
	==> kube-controller-manager [c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607] <==
	I1008 17:45:26.159309       1 shared_informer.go:320] Caches are synced for garbage collector
	W1008 17:45:32.861377       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:45:32.861495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:45:33.891726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:45:33.891882       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:45:34.664155       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:45:34.664262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:45:36.884096       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:45:36.884198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:46:10.893410       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:46:10.893597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:46:10.918334       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:46:10.918430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:46:12.072340       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:46:12.072429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:46:14.073243       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:46:14.073355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:46:43.668161       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:46:43.668205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1008 17:46:44.810273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.618509ms"
	I1008 17:46:44.829009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.686938ms"
	I1008 17:46:44.829083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.25µs"
	I1008 17:46:44.839418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="64.022µs"
	I1008 17:46:46.593220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.15553ms"
	I1008 17:46:46.593669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.793µs"
	
	
	==> kube-proxy [2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 17:34:27.295287       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 17:34:27.320256       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E1008 17:34:27.320461       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 17:34:27.397583       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 17:34:27.397619       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 17:34:27.397642       1 server_linux.go:169] "Using iptables Proxier"
	I1008 17:34:27.401097       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 17:34:27.401400       1 server.go:483] "Version info" version="v1.31.1"
	I1008 17:34:27.401410       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 17:34:27.404350       1 config.go:199] "Starting service config controller"
	I1008 17:34:27.404360       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 17:34:27.404384       1 config.go:105] "Starting endpoint slice config controller"
	I1008 17:34:27.404565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 17:34:27.410322       1 config.go:328] "Starting node config controller"
	I1008 17:34:27.410346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 17:34:27.505060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 17:34:27.505103       1 shared_informer.go:320] Caches are synced for service config
	I1008 17:34:27.513590       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671] <==
	W1008 17:34:18.593278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 17:34:18.593316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:18.594480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 17:34:18.597397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:18.597319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 17:34:18.597604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.401908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 17:34:19.401988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.414317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 17:34:19.414369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.570535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 17:34:19.571335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.698331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 17:34:19.698532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.713910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 17:34:19.713993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.808020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 17:34:19.809002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.808813       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 17:34:19.809164       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1008 17:34:19.812581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 17:34:19.812681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.892041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 17:34:19.892085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1008 17:34:22.674618       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.815625    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe86a2d5-d3af-4ca8-8c16-4a43b4d10a1b" containerName="volume-snapshot-controller"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816017    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70c956d8-0d97-477b-a407-7e74b8d53685" containerName="csi-resizer"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816063    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="hostpath"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816100    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="csi-provisioner"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816136    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="csi-snapshotter"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816168    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db6e092c-da8c-46ea-8e60-b2c9a91b4497" containerName="csi-attacher"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816198    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d0761a8-c6cb-4829-8a78-e7e1de94dba6" containerName="task-pv-container"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816229    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="liveness-probe"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816261    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e24c8dfd-265c-4e3a-82c3-41ce76e322f3" containerName="volume-snapshot-controller"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816292    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="csi-external-health-monitor-controller"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: E1008 17:46:44.816337    1202 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="node-driver-registrar"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816422    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="csi-snapshotter"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816460    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="db6e092c-da8c-46ea-8e60-b2c9a91b4497" containerName="csi-attacher"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816490    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="csi-provisioner"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816521    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe86a2d5-d3af-4ca8-8c16-4a43b4d10a1b" containerName="volume-snapshot-controller"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816551    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="csi-external-health-monitor-controller"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816587    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c956d8-0d97-477b-a407-7e74b8d53685" containerName="csi-resizer"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816621    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="node-driver-registrar"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816651    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="hostpath"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816682    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0761a8-c6cb-4829-8a78-e7e1de94dba6" containerName="task-pv-container"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816712    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24c8dfd-265c-4e3a-82c3-41ce76e322f3" containerName="volume-snapshot-controller"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.816752    1202 memory_manager.go:354] "RemoveStaleState removing state" podUID="64366d61-0edb-46a5-8813-2d30575552a2" containerName="liveness-probe"
	Oct 08 17:46:44 addons-738106 kubelet[1202]: I1008 17:46:44.915649    1202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhdzj\" (UniqueName: \"kubernetes.io/projected/66e7d107-14b4-456b-b417-ad6c6f92477a-kube-api-access-mhdzj\") pod \"hello-world-app-55bf9c44b4-hkkxb\" (UID: \"66e7d107-14b4-456b-b417-ad6c6f92477a\") " pod="default/hello-world-app-55bf9c44b4-hkkxb"
	Oct 08 17:46:46 addons-738106 kubelet[1202]: I1008 17:46:46.409121    1202 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 17:46:46 addons-738106 kubelet[1202]: I1008 17:46:46.579802    1202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-hkkxb" podStartSLOduration=1.884116108 podStartE2EDuration="2.579782245s" podCreationTimestamp="2024-10-08 17:46:44 +0000 UTC" firstStartedPulling="2024-10-08 17:46:45.374813349 +0000 UTC m=+744.124179378" lastFinishedPulling="2024-10-08 17:46:46.070479486 +0000 UTC m=+744.819845515" observedRunningTime="2024-10-08 17:46:46.579224466 +0000 UTC m=+745.328590494" watchObservedRunningTime="2024-10-08 17:46:46.579782245 +0000 UTC m=+745.329148292"
	
	
	==> storage-provisioner [6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c] <==
	I1008 17:34:32.964672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 17:34:32.984895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 17:34:32.985036       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 17:34:33.005181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 17:34:33.005977       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-738106_23dc17f6-b769-4717-9723-cd1dbd61450c!
	I1008 17:34:33.006030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"612e888d-f28a-40d1-a9dd-6b1dfcd905af", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-738106_23dc17f6-b769-4717-9723-cd1dbd61450c became leader
	I1008 17:34:33.107035       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-738106_23dc17f6-b769-4717-9723-cd1dbd61450c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-738106 -n addons-738106
helpers_test.go:261: (dbg) Run:  kubectl --context addons-738106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-zvmmq ingress-nginx-admission-patch-q8l6x
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-738106 describe pod busybox ingress-nginx-admission-create-zvmmq ingress-nginx-admission-patch-q8l6x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-738106 describe pod busybox ingress-nginx-admission-create-zvmmq ingress-nginx-admission-patch-q8l6x: exit status 1 (64.253404ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-738106/192.168.39.48
	Start Time:       Tue, 08 Oct 2024 17:35:51 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-62scb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-62scb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                           Age                   From               Message
	  ----     ------                           ----                  ----               -------
	  Normal   Scheduled                        10m                   default-scheduler  Successfully assigned default/busybox to addons-738106
	  Normal   Pulling                          9m25s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed                           9m25s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed                           9m25s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed                           9m11s (x6 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff                          5m41s (x21 over 10m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  FailedToRetrieveImagePullSecret  53s (x10 over 2m50s)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zvmmq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-q8l6x" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-738106 describe pod busybox ingress-nginx-admission-create-zvmmq ingress-nginx-admission-patch-q8l6x: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 addons disable ingress-dns --alsologtostderr -v=1: (1.202742749s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 addons disable ingress --alsologtostderr -v=1: (7.676372413s)
--- FAIL: TestAddons/parallel/Ingress (151.45s)

x
+
TestAddons/parallel/MetricsServer (359.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.639791ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-w72vc" [01f00ce3-494b-4d47-ab30-2439d417f6b6] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005291494s
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (77.887371ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 9m55.47370863s

** /stderr **
I1008 17:44:21.476865  537013 retry.go:31] will retry after 1.598330139s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (66.753875ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 9m57.140062368s

** /stderr **
I1008 17:44:23.142186  537013 retry.go:31] will retry after 3.673647498s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (68.769818ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 10m0.883649615s

** /stderr **
I1008 17:44:26.885881  537013 retry.go:31] will retry after 9.82077743s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (68.218753ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 10m10.773145717s

** /stderr **
I1008 17:44:36.775577  537013 retry.go:31] will retry after 12.695084798s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (64.009709ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 10m23.533257888s

** /stderr **
I1008 17:44:49.535561  537013 retry.go:31] will retry after 18.200290105s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (63.085465ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 10m41.798042741s

** /stderr **
I1008 17:45:07.800259  537013 retry.go:31] will retry after 18.01753984s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (64.129877ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 10m59.883396448s

** /stderr **
I1008 17:45:25.885532  537013 retry.go:31] will retry after 46.127373415s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (66.758708ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 11m46.078293989s

** /stderr **
I1008 17:46:12.080548  537013 retry.go:31] will retry after 43.782741319s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (62.317346ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 12m29.927642365s

** /stderr **
I1008 17:46:55.929931  537013 retry.go:31] will retry after 38.178421134s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (64.428381ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 13m8.172177995s

** /stderr **
I1008 17:47:34.174478  537013 retry.go:31] will retry after 1m21.079935855s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (65.156155ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 14m29.318608994s

** /stderr **
I1008 17:48:55.321123  537013 retry.go:31] will retry after 33.701779781s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (62.538736ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 15m3.085391332s

** /stderr **
I1008 17:49:29.087798  537013 retry.go:31] will retry after 44.741320867s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-738106 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-738106 top pods -n kube-system: exit status 1 (61.863472ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-4zs69, age: 15m47.890073833s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-738106 -n addons-738106
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 logs -n 25: (1.142689347s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-463465                                                                     | download-only-463465 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| delete  | -p download-only-691270                                                                     | download-only-691270 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-340266 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | binary-mirror-340266                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46361                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-340266                                                                     | binary-mirror-340266 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | addons-738106                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | addons-738106                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-738106 --wait=true                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:35 UTC | 08 Oct 24 17:35 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:43 UTC | 08 Oct 24 17:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | -p addons-738106                                                                            |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-738106 ssh cat                                                                       | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | /opt/local-path-provisioner/pvc-d1d617de-cc0c-4dd9-bd33-d96d94d0bb04_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | -p addons-738106                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-738106 ip                                                                            | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC | 08 Oct 24 17:44 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-738106 ssh curl -s                                                                   | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:45 UTC | 08 Oct 24 17:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-738106 addons                                                                        | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:45 UTC | 08 Oct 24 17:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-738106 ip                                                                            | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:46 UTC | 08 Oct 24 17:46 UTC |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:46 UTC | 08 Oct 24 17:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-738106 addons disable                                                                | addons-738106        | jenkins | v1.34.0 | 08 Oct 24 17:46 UTC | 08 Oct 24 17:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:33:40
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:33:40.689136  537626 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:33:40.689243  537626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:40.689253  537626 out.go:358] Setting ErrFile to fd 2...
	I1008 17:33:40.689257  537626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:40.689439  537626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:33:40.690035  537626 out.go:352] Setting JSON to false
	I1008 17:33:40.691113  537626 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4573,"bootTime":1728404248,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:33:40.691166  537626 start.go:139] virtualization: kvm guest
	I1008 17:33:40.693113  537626 out.go:177] * [addons-738106] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:33:40.694247  537626 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:33:40.694258  537626 notify.go:220] Checking for updates...
	I1008 17:33:40.696466  537626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:33:40.697696  537626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:33:40.698797  537626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:40.699881  537626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:33:40.700901  537626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:33:40.702093  537626 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:33:40.734458  537626 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 17:33:40.735521  537626 start.go:297] selected driver: kvm2
	I1008 17:33:40.735533  537626 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:33:40.735546  537626 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:33:40.736284  537626 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:33:40.736383  537626 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:33:40.751491  537626 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:33:40.751537  537626 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:33:40.751866  537626 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:33:40.751908  537626 cni.go:84] Creating CNI manager for ""
	I1008 17:33:40.751965  537626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:33:40.751994  537626 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 17:33:40.752073  537626 start.go:340] cluster config:
	{Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:33:40.752182  537626 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:33:40.753671  537626 out.go:177] * Starting "addons-738106" primary control-plane node in "addons-738106" cluster
	I1008 17:33:40.754857  537626 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:33:40.754885  537626 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 17:33:40.754899  537626 cache.go:56] Caching tarball of preloaded images
	I1008 17:33:40.754982  537626 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:33:40.754993  537626 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:33:40.755281  537626 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/config.json ...
	I1008 17:33:40.755299  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/config.json: {Name:mk595d6258b4a439716133f21c17ed4f412fe4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:33:40.755417  537626 start.go:360] acquireMachinesLock for addons-738106: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:33:40.755456  537626 start.go:364] duration metric: took 27.241µs to acquireMachinesLock for "addons-738106"
	I1008 17:33:40.755471  537626 start.go:93] Provisioning new machine with config: &{Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:33:40.755514  537626 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 17:33:40.757184  537626 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1008 17:33:40.757315  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:33:40.757363  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:33:40.771499  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I1008 17:33:40.771962  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:33:40.772564  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:33:40.772584  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:33:40.772964  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:33:40.773146  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:33:40.773312  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:33:40.773438  537626 start.go:159] libmachine.API.Create for "addons-738106" (driver="kvm2")
	I1008 17:33:40.773461  537626 client.go:168] LocalClient.Create starting
	I1008 17:33:40.773491  537626 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:33:41.005943  537626 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:33:41.410209  537626 main.go:141] libmachine: Running pre-create checks...
	I1008 17:33:41.410243  537626 main.go:141] libmachine: (addons-738106) Calling .PreCreateCheck
	I1008 17:33:41.410770  537626 main.go:141] libmachine: (addons-738106) Calling .GetConfigRaw
	I1008 17:33:41.411220  537626 main.go:141] libmachine: Creating machine...
	I1008 17:33:41.411239  537626 main.go:141] libmachine: (addons-738106) Calling .Create
	I1008 17:33:41.411358  537626 main.go:141] libmachine: (addons-738106) Creating KVM machine...
	I1008 17:33:41.412593  537626 main.go:141] libmachine: (addons-738106) DBG | found existing default KVM network
	I1008 17:33:41.413370  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.413228  537648 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1008 17:33:41.413411  537626 main.go:141] libmachine: (addons-738106) DBG | created network xml: 
	I1008 17:33:41.413425  537626 main.go:141] libmachine: (addons-738106) DBG | <network>
	I1008 17:33:41.413438  537626 main.go:141] libmachine: (addons-738106) DBG |   <name>mk-addons-738106</name>
	I1008 17:33:41.413450  537626 main.go:141] libmachine: (addons-738106) DBG |   <dns enable='no'/>
	I1008 17:33:41.413462  537626 main.go:141] libmachine: (addons-738106) DBG |   
	I1008 17:33:41.413474  537626 main.go:141] libmachine: (addons-738106) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 17:33:41.413489  537626 main.go:141] libmachine: (addons-738106) DBG |     <dhcp>
	I1008 17:33:41.413509  537626 main.go:141] libmachine: (addons-738106) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 17:33:41.413521  537626 main.go:141] libmachine: (addons-738106) DBG |     </dhcp>
	I1008 17:33:41.413536  537626 main.go:141] libmachine: (addons-738106) DBG |   </ip>
	I1008 17:33:41.413601  537626 main.go:141] libmachine: (addons-738106) DBG |   
	I1008 17:33:41.413643  537626 main.go:141] libmachine: (addons-738106) DBG | </network>
	I1008 17:33:41.413708  537626 main.go:141] libmachine: (addons-738106) DBG | 
	I1008 17:33:41.418980  537626 main.go:141] libmachine: (addons-738106) DBG | trying to create private KVM network mk-addons-738106 192.168.39.0/24...
	I1008 17:33:41.482290  537626 main.go:141] libmachine: (addons-738106) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106 ...
	I1008 17:33:41.482354  537626 main.go:141] libmachine: (addons-738106) DBG | private KVM network mk-addons-738106 192.168.39.0/24 created
	I1008 17:33:41.482379  537626 main.go:141] libmachine: (addons-738106) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:33:41.482406  537626 main.go:141] libmachine: (addons-738106) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:33:41.482424  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.482195  537648 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:41.752140  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.752000  537648 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa...
	I1008 17:33:41.904597  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.904464  537648 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/addons-738106.rawdisk...
	I1008 17:33:41.904628  537626 main.go:141] libmachine: (addons-738106) DBG | Writing magic tar header
	I1008 17:33:41.904639  537626 main.go:141] libmachine: (addons-738106) DBG | Writing SSH key tar header
	I1008 17:33:41.904647  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:41.904590  537648 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106 ...
	I1008 17:33:41.904704  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106
	I1008 17:33:41.904749  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:33:41.904762  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106 (perms=drwx------)
	I1008 17:33:41.904768  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:41.904779  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:33:41.904785  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:33:41.904791  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:33:41.904802  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:33:41.904825  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:33:41.904836  537626 main.go:141] libmachine: (addons-738106) DBG | Checking permissions on dir: /home
	I1008 17:33:41.904848  537626 main.go:141] libmachine: (addons-738106) DBG | Skipping /home - not owner
	I1008 17:33:41.904862  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:33:41.904871  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:33:41.904878  537626 main.go:141] libmachine: (addons-738106) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:33:41.904886  537626 main.go:141] libmachine: (addons-738106) Creating domain...
	I1008 17:33:41.905829  537626 main.go:141] libmachine: (addons-738106) define libvirt domain using xml: 
	I1008 17:33:41.905865  537626 main.go:141] libmachine: (addons-738106) <domain type='kvm'>
	I1008 17:33:41.905876  537626 main.go:141] libmachine: (addons-738106)   <name>addons-738106</name>
	I1008 17:33:41.905888  537626 main.go:141] libmachine: (addons-738106)   <memory unit='MiB'>4000</memory>
	I1008 17:33:41.905900  537626 main.go:141] libmachine: (addons-738106)   <vcpu>2</vcpu>
	I1008 17:33:41.905909  537626 main.go:141] libmachine: (addons-738106)   <features>
	I1008 17:33:41.905920  537626 main.go:141] libmachine: (addons-738106)     <acpi/>
	I1008 17:33:41.905928  537626 main.go:141] libmachine: (addons-738106)     <apic/>
	I1008 17:33:41.905938  537626 main.go:141] libmachine: (addons-738106)     <pae/>
	I1008 17:33:41.905955  537626 main.go:141] libmachine: (addons-738106)     
	I1008 17:33:41.905965  537626 main.go:141] libmachine: (addons-738106)   </features>
	I1008 17:33:41.905977  537626 main.go:141] libmachine: (addons-738106)   <cpu mode='host-passthrough'>
	I1008 17:33:41.905992  537626 main.go:141] libmachine: (addons-738106)   
	I1008 17:33:41.906007  537626 main.go:141] libmachine: (addons-738106)   </cpu>
	I1008 17:33:41.906113  537626 main.go:141] libmachine: (addons-738106)   <os>
	I1008 17:33:41.906176  537626 main.go:141] libmachine: (addons-738106)     <type>hvm</type>
	I1008 17:33:41.906195  537626 main.go:141] libmachine: (addons-738106)     <boot dev='cdrom'/>
	I1008 17:33:41.906205  537626 main.go:141] libmachine: (addons-738106)     <boot dev='hd'/>
	I1008 17:33:41.906217  537626 main.go:141] libmachine: (addons-738106)     <bootmenu enable='no'/>
	I1008 17:33:41.906223  537626 main.go:141] libmachine: (addons-738106)   </os>
	I1008 17:33:41.906231  537626 main.go:141] libmachine: (addons-738106)   <devices>
	I1008 17:33:41.906242  537626 main.go:141] libmachine: (addons-738106)     <disk type='file' device='cdrom'>
	I1008 17:33:41.906260  537626 main.go:141] libmachine: (addons-738106)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/boot2docker.iso'/>
	I1008 17:33:41.906271  537626 main.go:141] libmachine: (addons-738106)       <target dev='hdc' bus='scsi'/>
	I1008 17:33:41.906282  537626 main.go:141] libmachine: (addons-738106)       <readonly/>
	I1008 17:33:41.906291  537626 main.go:141] libmachine: (addons-738106)     </disk>
	I1008 17:33:41.906303  537626 main.go:141] libmachine: (addons-738106)     <disk type='file' device='disk'>
	I1008 17:33:41.906314  537626 main.go:141] libmachine: (addons-738106)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:33:41.906364  537626 main.go:141] libmachine: (addons-738106)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/addons-738106.rawdisk'/>
	I1008 17:33:41.906384  537626 main.go:141] libmachine: (addons-738106)       <target dev='hda' bus='virtio'/>
	I1008 17:33:41.906393  537626 main.go:141] libmachine: (addons-738106)     </disk>
	I1008 17:33:41.906398  537626 main.go:141] libmachine: (addons-738106)     <interface type='network'>
	I1008 17:33:41.906407  537626 main.go:141] libmachine: (addons-738106)       <source network='mk-addons-738106'/>
	I1008 17:33:41.906412  537626 main.go:141] libmachine: (addons-738106)       <model type='virtio'/>
	I1008 17:33:41.906418  537626 main.go:141] libmachine: (addons-738106)     </interface>
	I1008 17:33:41.906422  537626 main.go:141] libmachine: (addons-738106)     <interface type='network'>
	I1008 17:33:41.906430  537626 main.go:141] libmachine: (addons-738106)       <source network='default'/>
	I1008 17:33:41.906437  537626 main.go:141] libmachine: (addons-738106)       <model type='virtio'/>
	I1008 17:33:41.906443  537626 main.go:141] libmachine: (addons-738106)     </interface>
	I1008 17:33:41.906460  537626 main.go:141] libmachine: (addons-738106)     <serial type='pty'>
	I1008 17:33:41.906468  537626 main.go:141] libmachine: (addons-738106)       <target port='0'/>
	I1008 17:33:41.906473  537626 main.go:141] libmachine: (addons-738106)     </serial>
	I1008 17:33:41.906483  537626 main.go:141] libmachine: (addons-738106)     <console type='pty'>
	I1008 17:33:41.906488  537626 main.go:141] libmachine: (addons-738106)       <target type='serial' port='0'/>
	I1008 17:33:41.906493  537626 main.go:141] libmachine: (addons-738106)     </console>
	I1008 17:33:41.906499  537626 main.go:141] libmachine: (addons-738106)     <rng model='virtio'>
	I1008 17:33:41.906505  537626 main.go:141] libmachine: (addons-738106)       <backend model='random'>/dev/random</backend>
	I1008 17:33:41.906511  537626 main.go:141] libmachine: (addons-738106)     </rng>
	I1008 17:33:41.906516  537626 main.go:141] libmachine: (addons-738106)     
	I1008 17:33:41.906520  537626 main.go:141] libmachine: (addons-738106)     
	I1008 17:33:41.906525  537626 main.go:141] libmachine: (addons-738106)   </devices>
	I1008 17:33:41.906530  537626 main.go:141] libmachine: (addons-738106) </domain>
	I1008 17:33:41.906538  537626 main.go:141] libmachine: (addons-738106) 
	I1008 17:33:41.912401  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:55:8d:d9 in network default
	I1008 17:33:41.912967  537626 main.go:141] libmachine: (addons-738106) Ensuring networks are active...
	I1008 17:33:41.912984  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:41.913739  537626 main.go:141] libmachine: (addons-738106) Ensuring network default is active
	I1008 17:33:41.914048  537626 main.go:141] libmachine: (addons-738106) Ensuring network mk-addons-738106 is active
	I1008 17:33:41.914535  537626 main.go:141] libmachine: (addons-738106) Getting domain xml...
	I1008 17:33:41.915123  537626 main.go:141] libmachine: (addons-738106) Creating domain...
	I1008 17:33:43.279998  537626 main.go:141] libmachine: (addons-738106) Waiting to get IP...
	I1008 17:33:43.280697  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:43.281638  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:43.281778  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:43.281662  537648 retry.go:31] will retry after 280.838427ms: waiting for machine to come up
	I1008 17:33:43.563864  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:43.564296  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:43.564318  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:43.564251  537648 retry.go:31] will retry after 296.09476ms: waiting for machine to come up
	I1008 17:33:43.861843  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:43.862339  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:43.862368  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:43.862270  537648 retry.go:31] will retry after 332.461301ms: waiting for machine to come up
	I1008 17:33:44.196957  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:44.197420  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:44.197448  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:44.197391  537648 retry.go:31] will retry after 526.383574ms: waiting for machine to come up
	I1008 17:33:44.725015  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:44.725401  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:44.725429  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:44.725345  537648 retry.go:31] will retry after 538.672431ms: waiting for machine to come up
	I1008 17:33:45.266158  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:45.266580  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:45.266610  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:45.266527  537648 retry.go:31] will retry after 900.712695ms: waiting for machine to come up
	I1008 17:33:46.169489  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:46.169891  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:46.169923  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:46.169834  537648 retry.go:31] will retry after 1.143660308s: waiting for machine to come up
	I1008 17:33:47.315050  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:47.315428  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:47.315460  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:47.315375  537648 retry.go:31] will retry after 1.073047933s: waiting for machine to come up
	I1008 17:33:48.390588  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:48.390944  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:48.390988  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:48.390915  537648 retry.go:31] will retry after 1.696404496s: waiting for machine to come up
	I1008 17:33:50.089745  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:50.090140  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:50.090162  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:50.090101  537648 retry.go:31] will retry after 1.509226141s: waiting for machine to come up
	I1008 17:33:51.600783  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:51.601284  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:51.601315  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:51.601244  537648 retry.go:31] will retry after 1.977893914s: waiting for machine to come up
	I1008 17:33:53.581353  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:53.581850  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:53.581879  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:53.581798  537648 retry.go:31] will retry after 2.977291089s: waiting for machine to come up
	I1008 17:33:56.560180  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:33:56.560606  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:33:56.560635  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:33:56.560558  537648 retry.go:31] will retry after 3.871394004s: waiting for machine to come up
	I1008 17:34:00.433827  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:00.434188  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find current IP address of domain addons-738106 in network mk-addons-738106
	I1008 17:34:00.434229  537626 main.go:141] libmachine: (addons-738106) DBG | I1008 17:34:00.434156  537648 retry.go:31] will retry after 4.107122672s: waiting for machine to come up
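
The block above is libmachine polling libvirt's DHCP leases with a growing, jittered delay until the new domain reports an address. A minimal sketch of that retry pattern (illustrative only; lookupIP and the timings are placeholders, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain;
    // it is a hypothetical placeholder, not minikube's actual lookup.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls with a growing, jittered delay, mirroring the retry lines above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if ip, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
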
	I1008 17:34:04.545293  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.545823  537626 main.go:141] libmachine: (addons-738106) Found IP for machine: 192.168.39.48
	I1008 17:34:04.545842  537626 main.go:141] libmachine: (addons-738106) Reserving static IP address...
	I1008 17:34:04.545854  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has current primary IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.546187  537626 main.go:141] libmachine: (addons-738106) DBG | unable to find host DHCP lease matching {name: "addons-738106", mac: "52:54:00:4c:47:63", ip: "192.168.39.48"} in network mk-addons-738106
	I1008 17:34:04.614821  537626 main.go:141] libmachine: (addons-738106) DBG | Getting to WaitForSSH function...
	I1008 17:34:04.614857  537626 main.go:141] libmachine: (addons-738106) Reserved static IP address: 192.168.39.48
	I1008 17:34:04.614870  537626 main.go:141] libmachine: (addons-738106) Waiting for SSH to be available...
	I1008 17:34:04.617262  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.617651  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.617688  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.617832  537626 main.go:141] libmachine: (addons-738106) DBG | Using SSH client type: external
	I1008 17:34:04.617857  537626 main.go:141] libmachine: (addons-738106) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa (-rw-------)
	I1008 17:34:04.617889  537626 main.go:141] libmachine: (addons-738106) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:34:04.617908  537626 main.go:141] libmachine: (addons-738106) DBG | About to run SSH command:
	I1008 17:34:04.617943  537626 main.go:141] libmachine: (addons-738106) DBG | exit 0
	I1008 17:34:04.746027  537626 main.go:141] libmachine: (addons-738106) DBG | SSH cmd err, output: <nil>: 
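
SSH reachability is probed by shelling out to the system ssh client with host-key checking disabled and running `exit 0`; an error return means the guest is not accepting connections yet. A rough standalone equivalent (the key path is a placeholder; minikube derives it from the machine config):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Placeholder key path; address taken from the log above.
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/path/to/machines/<name>/id_rsa",
            "-p", "22",
            "docker@192.168.39.48",
            "exit 0",
        }
        if err := exec.Command("ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }
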
	I1008 17:34:04.746254  537626 main.go:141] libmachine: (addons-738106) KVM machine creation complete!
	I1008 17:34:04.746651  537626 main.go:141] libmachine: (addons-738106) Calling .GetConfigRaw
	I1008 17:34:04.747217  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:04.747408  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:04.747593  537626 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:34:04.747611  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:04.748868  537626 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:34:04.748883  537626 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:34:04.748891  537626 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:34:04.748899  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:04.750925  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.751259  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.751291  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.751400  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:04.751603  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.751761  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.751899  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:04.752053  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:04.752290  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:04.752304  537626 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:34:04.853192  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:34:04.853219  537626 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:34:04.853227  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:04.855866  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.856174  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.856214  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.856387  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:04.856566  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.856732  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.856912  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:04.857062  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:04.857277  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:04.857293  537626 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:34:04.958748  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:34:04.958839  537626 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:34:04.958854  537626 main.go:141] libmachine: Provisioning with buildroot...
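
The provisioner is selected by reading /etc/os-release on the guest; the ID=buildroot field above is what makes libmachine pick the buildroot provisioner. A small sketch of parsing that file into key/value pairs (illustrative, not the actual detection code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            fmt.Println("skip:", err)
            return
        }
        defer f.Close()
        vals := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                vals[k] = strings.Trim(v, `"`)
            }
        }
        // The provisioner is chosen from the ID field; "buildroot" selects the buildroot provisioner.
        fmt.Printf("ID=%s PRETTY_NAME=%s\n", vals["ID"], vals["PRETTY_NAME"])
    }
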
	I1008 17:34:04.958869  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:34:04.959108  537626 buildroot.go:166] provisioning hostname "addons-738106"
	I1008 17:34:04.959134  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:34:04.959328  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:04.961843  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.962210  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:04.962244  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:04.962401  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:04.962557  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.962687  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:04.962791  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:04.962903  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:04.963117  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:04.963135  537626 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-738106 && echo "addons-738106" | sudo tee /etc/hostname
	I1008 17:34:05.075384  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-738106
	
	I1008 17:34:05.075419  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.077767  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.078103  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.078131  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.078311  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.078501  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.078663  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.078743  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.078877  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:05.079079  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:05.079096  537626 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-738106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-738106/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-738106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:34:05.186157  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
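
The hostname script above is idempotent: it only touches /etc/hosts when no line already ends with the hostname, preferring to rewrite an existing 127.0.1.1 entry over appending a new one. A local Go sketch of the same logic (it prints the result instead of writing the file back with root privileges):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        const host = "addons-738106"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("skip:", err)
            return
        }
        text := string(data)
        // Mirrors the `grep -xq '.*\saddons-738106'` check: already mapped?
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(host) + `$`).MatchString(text) {
            fmt.Println("hostname already mapped")
            return
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(text) {
            text = loopback.ReplaceAllString(text, "127.0.1.1 "+host)
        } else {
            text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + host + "\n"
        }
        fmt.Print(text)
    }
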
	I1008 17:34:05.186192  537626 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:34:05.186225  537626 buildroot.go:174] setting up certificates
	I1008 17:34:05.186240  537626 provision.go:84] configureAuth start
	I1008 17:34:05.186255  537626 main.go:141] libmachine: (addons-738106) Calling .GetMachineName
	I1008 17:34:05.186545  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:05.189184  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.189567  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.189606  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.189693  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.191890  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.192196  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.192221  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.192357  537626 provision.go:143] copyHostCerts
	I1008 17:34:05.192436  537626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:34:05.192558  537626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:34:05.192617  537626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:34:05.192695  537626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.addons-738106 san=[127.0.0.1 192.168.39.48 addons-738106 localhost minikube]
	I1008 17:34:05.349238  537626 provision.go:177] copyRemoteCerts
	I1008 17:34:05.349305  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:34:05.349332  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.352101  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.352407  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.352435  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.352609  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.352768  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.352908  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.353013  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.432190  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:34:05.454658  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:34:05.476764  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:34:05.498637  537626 provision.go:87] duration metric: took 312.381796ms to configureAuth
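
configureAuth copies the CA material from the host and signs a fresh server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost and minikube, then uploads the cert and key to /etc/docker on the guest. A self-contained crypto/x509 sketch of issuing such a cert (a generic illustration, not minikube's provision code; error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key pair (in minikube this already exists under .minikube/certs).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs seen in the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-738106"}},
            DNSNames:     []string{"addons-738106", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.48")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
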
	I1008 17:34:05.498661  537626 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:34:05.498828  537626 config.go:182] Loaded profile config "addons-738106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:34:05.498928  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.501510  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.501847  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.501879  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.502016  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.502201  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.502352  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.502489  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.502695  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:05.502859  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:05.502874  537626 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:34:05.711457  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:34:05.711491  537626 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:34:05.711500  537626 main.go:141] libmachine: (addons-738106) Calling .GetURL
	I1008 17:34:05.712784  537626 main.go:141] libmachine: (addons-738106) DBG | Using libvirt version 6000000
	I1008 17:34:05.715240  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.715550  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.715575  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.715711  537626 main.go:141] libmachine: Docker is up and running!
	I1008 17:34:05.715722  537626 main.go:141] libmachine: Reticulating splines...
	I1008 17:34:05.715731  537626 client.go:171] duration metric: took 24.942259489s to LocalClient.Create
	I1008 17:34:05.715755  537626 start.go:167] duration metric: took 24.942316943s to libmachine.API.Create "addons-738106"
	I1008 17:34:05.715768  537626 start.go:293] postStartSetup for "addons-738106" (driver="kvm2")
	I1008 17:34:05.715782  537626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:34:05.715802  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.716060  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:34:05.716097  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.718151  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.718501  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.718530  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.718687  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.718861  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.719030  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.719174  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.800698  537626 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:34:05.804645  537626 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:34:05.804672  537626 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:34:05.804747  537626 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:34:05.804769  537626 start.go:296] duration metric: took 88.995336ms for postStartSetup
	I1008 17:34:05.804817  537626 main.go:141] libmachine: (addons-738106) Calling .GetConfigRaw
	I1008 17:34:05.805432  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:05.807893  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.808299  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.808326  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.808574  537626 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/config.json ...
	I1008 17:34:05.808776  537626 start.go:128] duration metric: took 25.053251682s to createHost
	I1008 17:34:05.808805  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.811112  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.811413  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.811439  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.811627  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.811791  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.811960  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.812118  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.812258  537626 main.go:141] libmachine: Using SSH client type: native
	I1008 17:34:05.812429  537626 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I1008 17:34:05.812439  537626 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:34:05.910631  537626 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728408845.886453669
	
	I1008 17:34:05.910659  537626 fix.go:216] guest clock: 1728408845.886453669
	I1008 17:34:05.910669  537626 fix.go:229] Guest: 2024-10-08 17:34:05.886453669 +0000 UTC Remote: 2024-10-08 17:34:05.80879367 +0000 UTC m=+25.157788476 (delta=77.659999ms)
	I1008 17:34:05.910691  537626 fix.go:200] guest clock delta is within tolerance: 77.659999ms
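
The guest clock is sampled with `date +%s.%N` over SSH and compared against the host clock; the ~78ms delta here is inside the tolerance, so no resync is forced. A small sketch of parsing that output and computing the delta (the 2s tolerance below is an assumption for illustration, not minikube's configured value):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output (e.g. "1728408845.886453669") into a
    // time.Time. It assumes the full 9-digit nanosecond field that date prints.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1728408845.886453669")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
    }
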
	I1008 17:34:05.910697  537626 start.go:83] releasing machines lock for "addons-738106", held for 25.155232261s
	I1008 17:34:05.910725  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.911029  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:05.913440  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.913748  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.913774  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.913968  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.914426  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.914581  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:05.914689  537626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:34:05.914737  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.914775  537626 ssh_runner.go:195] Run: cat /version.json
	I1008 17:34:05.914803  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:05.917231  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917497  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917612  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.917644  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917884  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:05.917910  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:05.917936  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.918065  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:05.918118  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.918268  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.918285  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:05.918421  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:05.918436  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.918570  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:05.991240  537626 ssh_runner.go:195] Run: systemctl --version
	I1008 17:34:06.014066  537626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:34:06.170003  537626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:34:06.176190  537626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:34:06.176269  537626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:34:06.192224  537626 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
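
Before minikube lays down its own CNI config, any pre-existing bridge/podman configs are renamed with a .mk_disabled suffix so CRI-O will not load them; that is what the find/mv invocation above does on the guest. A local-filesystem sketch of the same idea (runs locally rather than over SSH; patterns taken from the command above):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("skip:", err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            // Same patterns the find invocation uses: *bridge* or *podman*.
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                    continue
                }
                fmt.Println("disabled", src)
            }
        }
    }
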
	I1008 17:34:06.192243  537626 start.go:495] detecting cgroup driver to use...
	I1008 17:34:06.192307  537626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:34:06.208351  537626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:34:06.221631  537626 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:34:06.221735  537626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:34:06.234985  537626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:34:06.247848  537626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:34:06.361058  537626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:34:06.510411  537626 docker.go:233] disabling docker service ...
	I1008 17:34:06.510505  537626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:34:06.523563  537626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:34:06.536132  537626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:34:06.651508  537626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:34:06.764205  537626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:34:06.777440  537626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:34:06.795381  537626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:34:06.795459  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.805419  537626 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:34:06.805488  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.815187  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.824538  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.833890  537626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:34:06.843452  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.852855  537626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.868678  537626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:34:06.878074  537626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:34:06.886541  537626 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:34:06.886583  537626 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:34:06.898471  537626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
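
When the bridge netfilter sysctl is missing (as above), the fallback is to load br_netfilter and then enable IPv4 forwarding directly via /proc. A compact sketch of that check-and-fallback sequence (requires root; the commands mirror the log lines above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Verify the bridge netfilter sysctl; if the module isn't loaded the key is missing.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe failed (may be fine on some kernels):", err)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Println("could not enable ip_forward:", err)
        }
    }
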
	I1008 17:34:06.906877  537626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:34:07.020732  537626 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:34:07.114574  537626 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:34:07.114655  537626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:34:07.119392  537626 start.go:563] Will wait 60s for crictl version
	I1008 17:34:07.119450  537626 ssh_runner.go:195] Run: which crictl
	I1008 17:34:07.123099  537626 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:34:07.168996  537626 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:34:07.169095  537626 ssh_runner.go:195] Run: crio --version
	I1008 17:34:07.200572  537626 ssh_runner.go:195] Run: crio --version
	I1008 17:34:07.228827  537626 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:34:07.230289  537626 main.go:141] libmachine: (addons-738106) Calling .GetIP
	I1008 17:34:07.232823  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:07.233181  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:07.233212  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:07.233381  537626 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:34:07.237443  537626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:34:07.249431  537626 kubeadm.go:883] updating cluster {Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 17:34:07.249554  537626 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:34:07.249617  537626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:34:07.279917  537626 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 17:34:07.279996  537626 ssh_runner.go:195] Run: which lz4
	I1008 17:34:07.283943  537626 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 17:34:07.287802  537626 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 17:34:07.287824  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 17:34:08.497093  537626 crio.go:462] duration metric: took 1.213200062s to copy over tarball
	I1008 17:34:08.497163  537626 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 17:34:10.559838  537626 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.062646653s)
	I1008 17:34:10.559874  537626 crio.go:469] duration metric: took 2.062749764s to extract the tarball
	I1008 17:34:10.559885  537626 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 17:34:10.596900  537626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:34:10.636232  537626 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 17:34:10.636259  537626 cache_images.go:84] Images are preloaded, skipping loading
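
The preload flow above is: ask crictl whether the Kubernetes images are already present, and if not, confirm /preloaded.tar.lz4 is absent on the guest, scp the cached tarball across, extract it into /var with lz4, remove it, and re-check. A rough sketch of that check-then-copy decision (runSSH and the local cache path are placeholders, not minikube's ssh_runner API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // remoteFileExists is a placeholder for the `stat -c "%s %y" <path>` probe run over SSH.
    func remoteFileExists(runSSH func(string) error, path string) bool {
        return runSSH(fmt.Sprintf("stat -c '%%s %%y' %s", path)) == nil
    }

    func main() {
        // Placeholder: run a command on the guest via the system ssh client.
        runSSH := func(cmd string) error {
            return exec.Command("ssh", "docker@192.168.39.48", cmd).Run()
        }

        const tarball = "/preloaded.tar.lz4"
        if !remoteFileExists(runSSH, tarball) {
            // scp the cached preload tarball, then unpack it into /var as the log shows.
            _ = exec.Command("scp", "/path/to/cache/preloaded-images.tar.lz4",
                "docker@192.168.39.48:"+tarball).Run()
            _ = runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball)
            _ = runSSH("sudo rm -f " + tarball)
        }
        fmt.Println("preload step finished")
    }
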
	I1008 17:34:10.636298  537626 kubeadm.go:934] updating node { 192.168.39.48 8443 v1.31.1 crio true true} ...
	I1008 17:34:10.636438  537626 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-738106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:34:10.636529  537626 ssh_runner.go:195] Run: crio config
	I1008 17:34:10.680707  537626 cni.go:84] Creating CNI manager for ""
	I1008 17:34:10.680732  537626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:34:10.680757  537626 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 17:34:10.680791  537626 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-738106 NodeName:addons-738106 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 17:34:10.680942  537626 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-738106"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
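
That kubeadm/kubelet/kube-proxy config is rendered from the computed options (node IP, cluster name, Kubernetes version, CIDRs) and shipped to /var/tmp/minikube/kubeadm.yaml.new. A tiny text/template sketch of rendering one fragment of it (the template text is illustrative, not minikube's embedded template):

    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmOpts struct {
        AdvertiseAddress  string
        NodeName          string
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    // A cut-down fragment for illustration; the real template covers far more fields.
    const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(fragment))
        opts := kubeadmOpts{
            AdvertiseAddress:  "192.168.39.48",
            NodeName:          "addons-738106",
            KubernetesVersion: "v1.31.1",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        }
        if err := tmpl.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }
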
	I1008 17:34:10.681020  537626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:34:10.690845  537626 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 17:34:10.690917  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 17:34:10.700048  537626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 17:34:10.716022  537626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:34:10.731674  537626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1008 17:34:10.747105  537626 ssh_runner.go:195] Run: grep 192.168.39.48	control-plane.minikube.internal$ /etc/hosts
	I1008 17:34:10.750695  537626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:34:10.762251  537626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:34:10.873308  537626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:34:10.890510  537626 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106 for IP: 192.168.39.48
	I1008 17:34:10.890544  537626 certs.go:194] generating shared ca certs ...
	I1008 17:34:10.890579  537626 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:10.890758  537626 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:34:10.976005  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt ...
	I1008 17:34:10.976040  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt: {Name:mk2e03f13a61c15f4a04d301f8782221fad00d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:10.976213  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key ...
	I1008 17:34:10.976224  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key: {Name:mk3e6571165dc2f41e24b21c47ec4b378152c3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:10.976294  537626 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:34:11.070506  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt ...
	I1008 17:34:11.070539  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt: {Name:mkbdb588abd4e5f892ee88285210baf17ac68d59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.070694  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key ...
	I1008 17:34:11.070707  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key: {Name:mk2b9c3a9084dcbe12cc25abe16ba6ffe6e02f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.070777  537626 certs.go:256] generating profile certs ...
	I1008 17:34:11.070834  537626 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.key
	I1008 17:34:11.070857  537626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt with IP's: []
	I1008 17:34:11.127410  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt ...
	I1008 17:34:11.127442  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: {Name:mk059e29262c9e19b9ef00ba4b05c9a99e65ddfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.127592  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.key ...
	I1008 17:34:11.127602  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.key: {Name:mk3262ac206d5297ba8efeeb5c541edbb0aa34f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.127668  537626 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5
	I1008 17:34:11.127686  537626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.48]
	I1008 17:34:11.409390  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5 ...
	I1008 17:34:11.409429  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5: {Name:mk5d3287da65c1ac0657d6c2bda0130ed40c5006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.409605  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5 ...
	I1008 17:34:11.409618  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5: {Name:mk39fdd4deaa631d7548b40f45b39a8aec584738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.409699  537626 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt.c939c9f5 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt
	I1008 17:34:11.409789  537626 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key.c939c9f5 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key
	I1008 17:34:11.409835  537626 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key
	I1008 17:34:11.409854  537626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt with IP's: []
	I1008 17:34:11.473382  537626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt ...
	I1008 17:34:11.473413  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt: {Name:mke96fb5cd120bb380ed9b3bc0b2f6a63aba040f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.473571  537626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key ...
	I1008 17:34:11.473585  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key: {Name:mkf937bb5f1d51c1f200451b4b42e7fde440243a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:11.473747  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:34:11.473781  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:34:11.473808  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:34:11.473830  537626 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:34:11.474443  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:34:11.498473  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:34:11.520378  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:34:11.542036  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:34:11.572645  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 17:34:11.607203  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 17:34:11.629739  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:34:11.651481  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:34:11.673055  537626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:34:11.694095  537626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 17:34:11.709388  537626 ssh_runner.go:195] Run: openssl version
	I1008 17:34:11.714710  537626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:34:11.724661  537626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:34:11.728719  537626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:34:11.728778  537626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:34:11.734186  537626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
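	Note: the steps above copy the minikube CA and the profile's apiserver certificate onto the node and register the CA under /etc/ssl/certs. A minimal Go sketch, illustrative only (not part of the test tooling) and assuming the host-side paths shown in the log, for checking that apiserver.crt chains to that CA:

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    )

	    func main() {
	    	// Paths taken from the log above; adjust for your environment.
	    	base := "/home/jenkins/minikube-integration/19774-529764/.minikube"

	    	caPEM, err := os.ReadFile(base + "/ca.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	roots := x509.NewCertPool()
	    	if !roots.AppendCertsFromPEM(caPEM) {
	    		panic("failed to add minikube CA to pool")
	    	}

	    	leafPEM, err := os.ReadFile(base + "/profiles/addons-738106/apiserver.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(leafPEM)
	    	if block == nil {
	    		panic("no PEM block in apiserver.crt")
	    	}
	    	leaf, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}

	    	// ExtKeyUsageAny: we only care whether the chain verifies, not the EKU.
	    	if _, err := leaf.Verify(x509.VerifyOptions{
	    		Roots:     roots,
	    		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	    	}); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("apiserver.crt chains to the minikube CA")
	    }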
	I1008 17:34:11.743910  537626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:34:11.747766  537626 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:34:11.747820  537626 kubeadm.go:392] StartCluster: {Name:addons-738106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-738106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:34:11.747896  537626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 17:34:11.747958  537626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 17:34:11.781773  537626 cri.go:89] found id: ""
	I1008 17:34:11.781859  537626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 17:34:11.791551  537626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 17:34:11.801184  537626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 17:34:11.810432  537626 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 17:34:11.810456  537626 kubeadm.go:157] found existing configuration files:
	
	I1008 17:34:11.810506  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 17:34:11.819190  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 17:34:11.819271  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 17:34:11.828414  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 17:34:11.837327  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 17:34:11.837396  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 17:34:11.846202  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 17:34:11.854625  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 17:34:11.854668  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 17:34:11.863149  537626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 17:34:11.871421  537626 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 17:34:11.871469  537626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 17:34:11.880164  537626 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 17:34:11.929272  537626 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 17:34:11.929470  537626 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 17:34:12.031679  537626 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 17:34:12.031811  537626 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 17:34:12.031952  537626 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 17:34:12.043199  537626 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 17:34:12.045397  537626 out.go:235]   - Generating certificates and keys ...
	I1008 17:34:12.045520  537626 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 17:34:12.045632  537626 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 17:34:12.089991  537626 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 17:34:12.400933  537626 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 17:34:12.447240  537626 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 17:34:12.575099  537626 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 17:34:12.770280  537626 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 17:34:12.770473  537626 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-738106 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1008 17:34:12.871630  537626 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 17:34:12.871919  537626 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-738106 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I1008 17:34:12.966016  537626 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 17:34:13.568473  537626 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 17:34:13.679771  537626 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 17:34:13.680026  537626 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 17:34:13.875389  537626 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 17:34:13.996093  537626 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 17:34:14.196895  537626 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 17:34:14.370849  537626 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 17:34:14.486072  537626 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 17:34:14.486751  537626 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 17:34:14.489256  537626 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 17:34:14.490956  537626 out.go:235]   - Booting up control plane ...
	I1008 17:34:14.491039  537626 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 17:34:14.491109  537626 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 17:34:14.491627  537626 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 17:34:14.507235  537626 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 17:34:14.513825  537626 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 17:34:14.513894  537626 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 17:34:14.648900  537626 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 17:34:14.649067  537626 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 17:34:15.150335  537626 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.811507ms
	I1008 17:34:15.150438  537626 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 17:34:20.648700  537626 kubeadm.go:310] [api-check] The API server is healthy after 5.501451413s
	I1008 17:34:20.668633  537626 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 17:34:20.678297  537626 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 17:34:20.703405  537626 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 17:34:20.703601  537626 kubeadm.go:310] [mark-control-plane] Marking the node addons-738106 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 17:34:20.713827  537626 kubeadm.go:310] [bootstrap-token] Using token: ijcjf0.l7d52rdo1tzhu6v1
	I1008 17:34:20.715143  537626 out.go:235]   - Configuring RBAC rules ...
	I1008 17:34:20.715273  537626 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 17:34:20.723107  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 17:34:20.730590  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 17:34:20.733668  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 17:34:20.736581  537626 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 17:34:20.740820  537626 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 17:34:21.055724  537626 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 17:34:21.512687  537626 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 17:34:22.053335  537626 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 17:34:22.054193  537626 kubeadm.go:310] 
	I1008 17:34:22.054266  537626 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 17:34:22.054273  537626 kubeadm.go:310] 
	I1008 17:34:22.054371  537626 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 17:34:22.054399  537626 kubeadm.go:310] 
	I1008 17:34:22.054453  537626 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 17:34:22.054540  537626 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 17:34:22.054631  537626 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 17:34:22.054652  537626 kubeadm.go:310] 
	I1008 17:34:22.054730  537626 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 17:34:22.054766  537626 kubeadm.go:310] 
	I1008 17:34:22.054854  537626 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 17:34:22.054870  537626 kubeadm.go:310] 
	I1008 17:34:22.054949  537626 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 17:34:22.055091  537626 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 17:34:22.055213  537626 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 17:34:22.055231  537626 kubeadm.go:310] 
	I1008 17:34:22.055348  537626 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 17:34:22.055451  537626 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 17:34:22.055462  537626 kubeadm.go:310] 
	I1008 17:34:22.055583  537626 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ijcjf0.l7d52rdo1tzhu6v1 \
	I1008 17:34:22.055722  537626 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 17:34:22.055761  537626 kubeadm.go:310] 	--control-plane 
	I1008 17:34:22.055770  537626 kubeadm.go:310] 
	I1008 17:34:22.055900  537626 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 17:34:22.055919  537626 kubeadm.go:310] 
	I1008 17:34:22.056022  537626 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ijcjf0.l7d52rdo1tzhu6v1 \
	I1008 17:34:22.056163  537626 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 17:34:22.057373  537626 kubeadm.go:310] W1008 17:34:11.909094     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:34:22.057626  537626 kubeadm.go:310] W1008 17:34:11.909971     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:34:22.057741  537626 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
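	Note: the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch, illustrative only and assuming the node-side path /var/lib/minikube/certs/ca.crt shown earlier in the log, that reproduces a value in that format:

	    package main

	    import (
	    	"crypto/sha256"
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    )

	    func main() {
	    	// Path taken from the log above; adjust for your environment.
	    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(pemBytes)
	    	if block == nil {
	    		panic("no PEM block in ca.crt")
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// The hash is computed over the DER-encoded SubjectPublicKeyInfo of the CA key.
	    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	    }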
	I1008 17:34:22.057782  537626 cni.go:84] Creating CNI manager for ""
	I1008 17:34:22.057796  537626 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:34:22.059536  537626 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 17:34:22.060703  537626 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 17:34:22.071101  537626 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 17:34:22.094582  537626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 17:34:22.094680  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:22.094692  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-738106 minikube.k8s.io/updated_at=2024_10_08T17_34_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=addons-738106 minikube.k8s.io/primary=true
	I1008 17:34:22.237739  537626 ops.go:34] apiserver oom_adj: -16
	I1008 17:34:22.237907  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:22.738839  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:23.238428  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:23.738842  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:24.238435  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:24.738905  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:25.238799  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:25.738870  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:26.238792  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:26.738587  537626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:34:26.849091  537626 kubeadm.go:1113] duration metric: took 4.754486007s to wait for elevateKubeSystemPrivileges
	I1008 17:34:26.849135  537626 kubeadm.go:394] duration metric: took 15.101320067s to StartCluster
	I1008 17:34:26.849161  537626 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:26.849312  537626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:34:26.849837  537626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:34:26.850093  537626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 17:34:26.850086  537626 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:34:26.850113  537626 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1008 17:34:26.850239  537626 addons.go:69] Setting yakd=true in profile "addons-738106"
	I1008 17:34:26.850278  537626 addons.go:69] Setting registry=true in profile "addons-738106"
	I1008 17:34:26.850289  537626 addons.go:69] Setting ingress=true in profile "addons-738106"
	I1008 17:34:26.850294  537626 addons.go:234] Setting addon yakd=true in "addons-738106"
	I1008 17:34:26.850303  537626 addons.go:234] Setting addon registry=true in "addons-738106"
	I1008 17:34:26.850306  537626 config.go:182] Loaded profile config "addons-738106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:34:26.850312  537626 addons.go:69] Setting ingress-dns=true in profile "addons-738106"
	I1008 17:34:26.850333  537626 addons.go:234] Setting addon ingress-dns=true in "addons-738106"
	I1008 17:34:26.850342  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850354  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850366  537626 addons.go:69] Setting volcano=true in profile "addons-738106"
	I1008 17:34:26.850377  537626 addons.go:234] Setting addon volcano=true in "addons-738106"
	I1008 17:34:26.850388  537626 addons.go:69] Setting inspektor-gadget=true in profile "addons-738106"
	I1008 17:34:26.850400  537626 addons.go:234] Setting addon inspektor-gadget=true in "addons-738106"
	I1008 17:34:26.850379  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850424  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850484  537626 addons.go:69] Setting volumesnapshots=true in profile "addons-738106"
	I1008 17:34:26.850508  537626 addons.go:234] Setting addon volumesnapshots=true in "addons-738106"
	I1008 17:34:26.850542  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850861  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.850252  537626 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-738106"
	I1008 17:34:26.850880  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.850885  537626 addons.go:69] Setting storage-provisioner=true in profile "addons-738106"
	I1008 17:34:26.850404  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850900  537626 addons.go:234] Setting addon storage-provisioner=true in "addons-738106"
	I1008 17:34:26.850907  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850915  537626 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-738106"
	I1008 17:34:26.850922  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.850929  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.850922  537626 addons.go:69] Setting metrics-server=true in profile "addons-738106"
	I1008 17:34:26.850251  537626 addons.go:69] Setting cloud-spanner=true in profile "addons-738106"
	I1008 17:34:26.850951  537626 addons.go:234] Setting addon metrics-server=true in "addons-738106"
	I1008 17:34:26.850955  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850957  537626 addons.go:234] Setting addon cloud-spanner=true in "addons-738106"
	I1008 17:34:26.850937  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851014  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851138  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851248  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851267  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851293  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850272  537626 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-738106"
	I1008 17:34:26.851314  537626 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-738106"
	I1008 17:34:26.850869  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851337  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851299  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851346  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850281  537626 addons.go:69] Setting gcp-auth=true in profile "addons-738106"
	I1008 17:34:26.850240  537626 addons.go:69] Setting default-storageclass=true in profile "addons-738106"
	I1008 17:34:26.851422  537626 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-738106"
	I1008 17:34:26.851424  537626 mustload.go:65] Loading cluster: addons-738106
	I1008 17:34:26.850267  537626 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-738106"
	I1008 17:34:26.851441  537626 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-738106"
	I1008 17:34:26.851500  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.851601  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851661  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851718  537626 config.go:182] Loaded profile config "addons-738106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:34:26.851312  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851770  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.850304  537626 addons.go:234] Setting addon ingress=true in "addons-738106"
	I1008 17:34:26.851873  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851910  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.851968  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.851989  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852039  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852047  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852062  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852082  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852095  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852118  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852125  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.852131  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.852154  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.852285  537626 out.go:177] * Verifying Kubernetes components...
	I1008 17:34:26.853752  537626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:34:26.871278  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I1008 17:34:26.871325  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 17:34:26.871547  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I1008 17:34:26.871550  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I1008 17:34:26.871998  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872063  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872193  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872562  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.872583  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.872691  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.872916  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.872966  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.872997  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.873148  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.873161  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.873597  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.873626  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.873783  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.873842  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I1008 17:34:26.886884  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.886932  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.888123  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.888164  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.888199  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.888360  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.888382  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.888473  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I1008 17:34:26.889034  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.889048  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.889075  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.889117  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.889199  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.889599  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.889618  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.890023  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.890050  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.890541  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.890567  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.890655  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.891173  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.891215  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.891420  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.898767  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.898813  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.918340  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I1008 17:34:26.919105  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.919994  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.920019  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.920462  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.920521  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1008 17:34:26.920851  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.921075  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.921662  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.921680  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.922081  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.922413  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.924131  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I1008 17:34:26.924372  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.924690  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.925509  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.925530  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.926007  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.926471  537626 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1008 17:34:26.926645  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.926710  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I1008 17:34:26.927861  537626 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1008 17:34:26.927886  537626 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1008 17:34:26.927916  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.928808  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
	I1008 17:34:26.928817  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I1008 17:34:26.928842  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1008 17:34:26.928862  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.928808  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.929261  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.929308  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.929342  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.929761  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.929788  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.929805  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.929855  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.929869  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.930195  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.930258  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.930270  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.930282  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.930683  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.930716  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.930787  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.930804  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.931394  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.931406  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.931464  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.931854  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.931896  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.932093  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1008 17:34:26.932791  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.932829  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.932947  537626 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1008 17:34:26.932988  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.933832  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.933869  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.934087  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.934112  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.934128  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.934247  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.934259  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.934330  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.934380  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I1008 17:34:26.934527  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.934680  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.934742  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.934812  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.935117  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.935255  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.935287  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.935529  537626 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 17:34:26.935547  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1008 17:34:26.935566  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.935662  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.935910  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.937528  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.940087  537626 addons.go:234] Setting addon default-storageclass=true in "addons-738106"
	I1008 17:34:26.940136  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.940491  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.940523  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.941770  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1008 17:34:26.941919  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I1008 17:34:26.942071  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.942607  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.942553  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.942750  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.942945  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1008 17:34:26.942962  537626 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1008 17:34:26.942982  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.943615  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.943640  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.943653  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.943806  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.943913  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.943952  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.943987  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.944217  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.944751  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.945023  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.946305  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.946724  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.946751  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.946999  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.947340  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.947488  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.947657  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.949912  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I1008 17:34:26.952200  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I1008 17:34:26.952847  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.953389  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.953408  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.954468  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.954737  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.956357  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.957235  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I1008 17:34:26.957412  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38913
	I1008 17:34:26.957933  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.958411  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.958416  537626 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 17:34:26.958605  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.958630  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.958895  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.958915  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.959129  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.959192  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I1008 17:34:26.959501  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.959858  537626 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:34:26.959878  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 17:34:26.959896  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.960343  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.960909  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.960929  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.961427  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.961763  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.962383  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1008 17:34:26.962879  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.963331  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.963348  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.963703  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.963889  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.964095  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.965434  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I1008 17:34:26.965459  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:26.965493  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:26.965437  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.965849  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.965942  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:26.965947  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:26.965957  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:26.965965  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:26.965971  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:26.965973  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.965988  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.966139  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.966221  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:26.966250  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:26.966257  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	W1008 17:34:26.966381  537626 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1008 17:34:26.966709  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.966746  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.967026  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.967029  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.967064  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.968363  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.968374  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.968420  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.968565  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.968760  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.968835  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.969949  537626 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1008 17:34:26.971014  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 17:34:26.971035  537626 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 17:34:26.971056  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.971139  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.971199  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
	I1008 17:34:26.971700  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.972244  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.972261  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.972619  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1008 17:34:26.972766  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.973056  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.973075  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I1008 17:34:26.973669  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.974189  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.974206  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.974715  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.974858  537626 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-738106"
	I1008 17:34:26.974909  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:26.975078  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.975266  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.975311  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.975382  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1008 17:34:26.976013  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.976114  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.976622  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.976709  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.976892  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.976909  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.976936  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.977027  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.977085  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.977524  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I1008 17:34:26.977531  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.977554  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1008 17:34:26.978070  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.978243  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.978257  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.978980  537626 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1008 17:34:26.979021  537626 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1008 17:34:26.979036  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.979052  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.979082  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.979449  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.979499  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.979533  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.980017  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1008 17:34:26.980025  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:26.980057  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:26.980162  537626 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 17:34:26.980175  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1008 17:34:26.980193  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.980533  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1008 17:34:26.980554  537626 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1008 17:34:26.980567  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.982185  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1008 17:34:26.983244  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1008 17:34:26.983826  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.983847  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I1008 17:34:26.984222  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:26.984532  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.984555  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.984568  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.984638  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:26.984654  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:26.984741  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.984922  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.985095  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.985132  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.985151  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.985097  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:26.985200  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1008 17:34:26.985306  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.985599  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:26.985606  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.985775  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.985951  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.986051  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.987294  537626 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1008 17:34:26.987404  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:26.988257  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1008 17:34:26.988276  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1008 17:34:26.988294  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.989042  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 17:34:26.990243  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1008 17:34:26.991190  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.991631  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.991667  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.991829  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.992001  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.992125  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.992231  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:26.992521  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 17:34:26.993913  537626 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 17:34:26.993937  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1008 17:34:26.993954  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:26.997179  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.997681  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:26.997701  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:26.997890  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:26.998059  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:26.998182  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:26.998335  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.000017  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I1008 17:34:27.000544  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.001144  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.001163  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.001442  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.001662  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.002966  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39145
	I1008 17:34:27.003151  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I1008 17:34:27.003338  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.003424  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.003519  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.003888  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.003915  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.004338  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.004468  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.004505  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.004914  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:27.004962  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:27.004985  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.005163  537626 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1008 17:34:27.006443  537626 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1008 17:34:27.006461  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1008 17:34:27.006479  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.006529  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.006712  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I1008 17:34:27.007878  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.008747  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.008763  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.008789  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.009569  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.009660  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.009884  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.010013  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.010101  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.010183  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.010383  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.010386  537626 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1008 17:34:27.010507  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.010597  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.012224  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.012440  537626 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 17:34:27.012458  537626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 17:34:27.012477  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.012749  537626 out.go:177]   - Using image docker.io/registry:2.8.3
	I1008 17:34:27.014480  537626 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1008 17:34:27.014502  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1008 17:34:27.014516  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.016259  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.016823  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.016844  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.017001  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.017166  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.017256  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.017343  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.018260  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	W1008 17:34:27.018387  537626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51170->192.168.39.48:22: read: connection reset by peer
	I1008 17:34:27.018413  537626 retry.go:31] will retry after 153.104938ms: ssh: handshake failed: read tcp 192.168.39.1:51170->192.168.39.48:22: read: connection reset by peer
	I1008 17:34:27.018719  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.018732  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.018942  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.019084  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.019203  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.019488  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.024210  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35987
	I1008 17:34:27.024622  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:27.025188  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:27.025203  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:27.025538  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:27.025714  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:27.027141  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:27.028943  537626 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1008 17:34:27.030072  537626 out.go:177]   - Using image docker.io/busybox:stable
	I1008 17:34:27.031093  537626 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 17:34:27.031110  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1008 17:34:27.031124  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:27.033661  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.033959  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:27.033983  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:27.034108  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:27.034299  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:27.034452  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:27.034599  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:27.300930  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1008 17:34:27.300963  537626 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1008 17:34:27.331935  537626 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1008 17:34:27.331963  537626 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1008 17:34:27.366200  537626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:34:27.366201  537626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 17:34:27.402587  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:34:27.403408  537626 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1008 17:34:27.403429  537626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1008 17:34:27.404696  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 17:34:27.445572  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1008 17:34:27.445611  537626 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1008 17:34:27.449416  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1008 17:34:27.449448  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1008 17:34:27.497928  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 17:34:27.500719  537626 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1008 17:34:27.500749  537626 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1008 17:34:27.528545  537626 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1008 17:34:27.528583  537626 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1008 17:34:27.532304  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 17:34:27.532330  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1008 17:34:27.535505  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 17:34:27.553335  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 17:34:27.608094  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 17:34:27.634109  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1008 17:34:27.683517  537626 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1008 17:34:27.683544  537626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1008 17:34:27.708868  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1008 17:34:27.708896  537626 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1008 17:34:27.725020  537626 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1008 17:34:27.725046  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1008 17:34:27.727572  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 17:34:27.727592  537626 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 17:34:27.739662  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1008 17:34:27.739688  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1008 17:34:27.749414  537626 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1008 17:34:27.749437  537626 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1008 17:34:27.809410  537626 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1008 17:34:27.809446  537626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1008 17:34:27.897639  537626 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 17:34:27.897681  537626 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 17:34:27.916094  537626 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1008 17:34:27.916131  537626 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1008 17:34:27.918602  537626 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1008 17:34:27.918623  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1008 17:34:27.935476  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1008 17:34:27.935502  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1008 17:34:27.949759  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1008 17:34:27.949784  537626 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1008 17:34:27.958909  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1008 17:34:28.014400  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 17:34:28.068948  537626 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 17:34:28.068973  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1008 17:34:28.087378  537626 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1008 17:34:28.087413  537626 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1008 17:34:28.096141  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1008 17:34:28.100088  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1008 17:34:28.100113  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1008 17:34:28.213586  537626 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1008 17:34:28.213620  537626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1008 17:34:28.236357  537626 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1008 17:34:28.236381  537626 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1008 17:34:28.273339  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 17:34:28.530307  537626 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1008 17:34:28.530454  537626 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1008 17:34:28.592041  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1008 17:34:28.592072  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1008 17:34:28.754771  537626 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1008 17:34:28.754797  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1008 17:34:28.872490  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1008 17:34:28.911349  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1008 17:34:28.911383  537626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1008 17:34:29.170648  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1008 17:34:29.170676  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1008 17:34:29.237263  537626 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.870968074s)
	I1008 17:34:29.237309  537626 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1008 17:34:29.237321  537626 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.871076931s)
	I1008 17:34:29.238125  537626 node_ready.go:35] waiting up to 6m0s for node "addons-738106" to be "Ready" ...
	I1008 17:34:29.244216  537626 node_ready.go:49] node "addons-738106" has status "Ready":"True"
	I1008 17:34:29.244245  537626 node_ready.go:38] duration metric: took 6.095882ms for node "addons-738106" to be "Ready" ...
	I1008 17:34:29.244256  537626 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:34:29.258109  537626 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:29.491930  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1008 17:34:29.491958  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1008 17:34:29.743530  537626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-738106" context rescaled to 1 replicas
	I1008 17:34:29.752026  537626 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 17:34:29.752061  537626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1008 17:34:30.149529  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 17:34:31.176702  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.774067143s)
	I1008 17:34:31.176767  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:31.176789  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:31.177129  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:31.177147  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:31.177152  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:31.177178  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:31.177189  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:31.177505  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:31.177529  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:31.177541  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:31.265910  537626 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:33.269016  537626 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:33.993850  537626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1008 17:34:33.993899  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:33.997516  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:33.997970  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:33.997998  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:33.998213  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:33.998471  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:33.998626  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:33.998767  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:34.475290  537626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1008 17:34:34.598232  537626 addons.go:234] Setting addon gcp-auth=true in "addons-738106"
	I1008 17:34:34.598301  537626 host.go:66] Checking if "addons-738106" exists ...
	I1008 17:34:34.598685  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:34.598755  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:34.614277  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I1008 17:34:34.614729  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:34.615205  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:34.615230  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:34.615556  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:34.616191  537626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:34:34.616257  537626 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:34:34.632271  537626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I1008 17:34:34.632779  537626 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:34:34.633348  537626 main.go:141] libmachine: Using API Version  1
	I1008 17:34:34.633381  537626 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:34:34.633781  537626 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:34:34.634014  537626 main.go:141] libmachine: (addons-738106) Calling .GetState
	I1008 17:34:34.635682  537626 main.go:141] libmachine: (addons-738106) Calling .DriverName
	I1008 17:34:34.635919  537626 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1008 17:34:34.635946  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHHostname
	I1008 17:34:34.638878  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:34.639253  537626 main.go:141] libmachine: (addons-738106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:47:63", ip: ""} in network mk-addons-738106: {Iface:virbr1 ExpiryTime:2024-10-08 18:33:56 +0000 UTC Type:0 Mac:52:54:00:4c:47:63 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:addons-738106 Clientid:01:52:54:00:4c:47:63}
	I1008 17:34:34.639280  537626 main.go:141] libmachine: (addons-738106) DBG | domain addons-738106 has defined IP address 192.168.39.48 and MAC address 52:54:00:4c:47:63 in network mk-addons-738106
	I1008 17:34:34.639468  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHPort
	I1008 17:34:34.639648  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHKeyPath
	I1008 17:34:34.639844  537626 main.go:141] libmachine: (addons-738106) Calling .GetSSHUsername
	I1008 17:34:34.640029  537626 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/addons-738106/id_rsa Username:docker}
	I1008 17:34:34.914430  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.509702088s)
	I1008 17:34:34.914490  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914502  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914505  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.416534256s)
	I1008 17:34:34.914554  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914559  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.379027448s)
	I1008 17:34:34.914573  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914598  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914615  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914604  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.361244559s)
	I1008 17:34:34.914634  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.306516511s)
	I1008 17:34:34.914674  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.280529257s)
	I1008 17:34:34.914683  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914690  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914693  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914707  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914708  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914741  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.955804509s)
	I1008 17:34:34.914747  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914756  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914765  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914868  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.900429137s)
	I1008 17:34:34.914891  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.914901  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.914993  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.818822946s)
	I1008 17:34:34.915010  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915023  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915160  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.641788358s)
	W1008 17:34:34.915192  537626 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 17:34:34.915222  537626 retry.go:31] will retry after 287.200789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 17:34:34.915317  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.042794512s)
	I1008 17:34:34.915344  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915354  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915392  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.915404  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.915413  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915421  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915475  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.915508  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.915516  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.915525  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915531  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.915669  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.915698  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.915704  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.915711  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.915718  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916083  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916111  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916118  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916124  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916130  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916379  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916415  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916429  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916451  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916456  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916462  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916468  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916511  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916517  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916523  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916528  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916566  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916572  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916578  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.916584  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.916666  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.916713  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.916719  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.916728  537626 addons.go:475] Verifying addon registry=true in "addons-738106"
	I1008 17:34:34.918587  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.918614  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.918620  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.918628  537626 addons.go:475] Verifying addon ingress=true in "addons-738106"
	I1008 17:34:34.918750  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.918758  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.918766  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.918772  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.918816  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.918841  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.918847  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.918853  537626 addons.go:475] Verifying addon metrics-server=true in "addons-738106"
	I1008 17:34:34.919080  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919119  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.919126  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919502  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919529  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921226  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919584  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919609  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919618  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921291  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919622  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919638  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921329  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921343  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.921354  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.919641  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921379  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.919661  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919676  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.919692  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921472  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921480  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.921487  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.919711  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921528  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921626  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:34.921659  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921666  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921071  537626 out.go:177] * Verifying ingress addon...
	I1008 17:34:34.921850  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.921868  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:34.921090  537626 out.go:177] * Verifying registry addon...
	I1008 17:34:34.923202  537626 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-738106 service yakd-dashboard -n yakd-dashboard
	
	I1008 17:34:34.924168  537626 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1008 17:34:34.924168  537626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1008 17:34:34.948329  537626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 17:34:34.948356  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:34.948494  537626 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1008 17:34:34.948508  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:34.979980  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.980004  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.980326  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.980341  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	W1008 17:34:34.980450  537626 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1008 17:34:34.982846  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:34.982862  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:34.983132  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:34.983148  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:35.202679  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 17:34:35.461358  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:35.462051  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:35.598228  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.448639113s)
	I1008 17:34:35.598291  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:35.598307  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:35.598624  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:35.598645  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:35.598655  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:35.598664  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:35.599019  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:35.599054  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:35.599072  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:35.599091  537626 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-738106"
	I1008 17:34:35.599696  537626 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1008 17:34:35.600395  537626 out.go:177] * Verifying csi-hostpath-driver addon...
	I1008 17:34:35.601634  537626 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1008 17:34:35.602270  537626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1008 17:34:35.602718  537626 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1008 17:34:35.602737  537626 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1008 17:34:35.627382  537626 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 17:34:35.627406  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:35.717071  537626 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1008 17:34:35.717105  537626 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1008 17:34:35.771634  537626 pod_ready.go:103] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:35.828934  537626 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 17:34:35.828963  537626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1008 17:34:35.866051  537626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 17:34:35.929217  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:35.929730  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:36.109938  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:36.428529  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:36.428869  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:36.607438  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:36.947918  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:36.948171  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:37.109527  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:37.383182  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.180422305s)
	I1008 17:34:37.383210  537626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.517121439s)
	I1008 17:34:37.383243  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.383260  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.383260  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.383276  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.383603  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:37.383613  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.383627  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.383637  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.383644  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.383881  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.383898  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.385502  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.385555  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.385578  537626 main.go:141] libmachine: Making call to close driver server
	I1008 17:34:37.385597  537626 main.go:141] libmachine: (addons-738106) Calling .Close
	I1008 17:34:37.385823  537626 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:34:37.385842  537626 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:34:37.385876  537626 main.go:141] libmachine: (addons-738106) DBG | Closing plugin on server side
	I1008 17:34:37.387125  537626 addons.go:475] Verifying addon gcp-auth=true in "addons-738106"
	I1008 17:34:37.388872  537626 out.go:177] * Verifying gcp-auth addon...
	I1008 17:34:37.390870  537626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1008 17:34:37.408047  537626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 17:34:37.408066  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:37.441365  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:37.442049  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:37.607422  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:37.769557  537626 pod_ready.go:93] pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:37.769588  537626 pod_ready.go:82] duration metric: took 8.511452172s for pod "coredns-7c65d6cfc9-4zs69" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:37.769600  537626 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:37.897291  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:37.928732  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:37.929204  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:38.111633  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:38.395143  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:38.428627  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:38.429183  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:38.607682  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:38.896034  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:38.928630  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:38.928898  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:39.111114  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:39.296621  537626 pod_ready.go:98] pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.48 HostIPs:[{IP:192.168.39.48}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-08 17:34:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-08 17:34:32 +0000 UTC,FinishedAt:2024-10-08 17:34:37 +0000 UTC,ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e Started:0xc00294b440 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029058e0} {Name:kube-api-access-2mxkw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029058f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1008 17:34:39.296656  537626 pod_ready.go:82] duration metric: took 1.527048083s for pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace to be "Ready" ...
	E1008 17:34:39.296672  537626 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-bk9x7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 17:34:26 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.48 HostIPs:[{IP:192.168.39.48}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-08 17:34:26 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-08 17:34:32 +0000 UTC,FinishedAt:2024-10-08 17:34:37 +0000 UTC,ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://825e3102ea09b8c089481a9725e787a9e3e8254a731520fc1701c427670ef44e Started:0xc00294b440 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029058e0} {Name:kube-api-access-2mxkw MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029058f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1008 17:34:39.296692  537626 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.331936  537626 pod_ready.go:93] pod "etcd-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.331962  537626 pod_ready.go:82] duration metric: took 35.25898ms for pod "etcd-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.331983  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.338964  537626 pod_ready.go:93] pod "kube-apiserver-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.338986  537626 pod_ready.go:82] duration metric: took 6.993302ms for pod "kube-apiserver-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.338997  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.346652  537626 pod_ready.go:93] pod "kube-controller-manager-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.346672  537626 pod_ready.go:82] duration metric: took 7.66745ms for pod "kube-controller-manager-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.346684  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7clnt" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.361811  537626 pod_ready.go:93] pod "kube-proxy-7clnt" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.361834  537626 pod_ready.go:82] duration metric: took 15.142018ms for pod "kube-proxy-7clnt" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.361844  537626 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.411880  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:39.433069  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:39.434810  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:39.607399  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:39.763599  537626 pod_ready.go:93] pod "kube-scheduler-addons-738106" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:39.763631  537626 pod_ready.go:82] duration metric: took 401.7777ms for pod "kube-scheduler-addons-738106" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.763646  537626 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:39.894381  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:39.934736  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:39.935241  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:40.108178  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:40.402357  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:40.429131  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:40.431577  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:40.607247  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:40.895778  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:40.928419  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:40.930147  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:41.116312  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:41.542501  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:41.542744  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:41.544763  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:41.606612  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:41.769394  537626 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:41.895309  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:41.928518  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:41.929279  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:42.107156  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:42.394536  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:42.429931  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:42.430674  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:42.609645  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:42.894946  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:42.928811  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:42.929115  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:43.106403  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:43.395784  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:43.429122  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:43.430345  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:43.608653  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:43.770557  537626 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace has status "Ready":"False"
	I1008 17:34:43.894914  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:43.931082  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:43.931841  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:44.107117  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:44.394760  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:44.427980  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:44.428696  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:44.607591  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:44.895791  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:44.928407  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:44.928654  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:45.106683  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:45.396219  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:45.429182  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:45.429897  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:45.608666  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:45.770001  537626 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace has status "Ready":"True"
	I1008 17:34:45.770026  537626 pod_ready.go:82] duration metric: took 6.006371678s for pod "nvidia-device-plugin-daemonset-dz2k9" in "kube-system" namespace to be "Ready" ...
	I1008 17:34:45.770035  537626 pod_ready.go:39] duration metric: took 16.525763483s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:34:45.770051  537626 api_server.go:52] waiting for apiserver process to appear ...
	I1008 17:34:45.770103  537626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 17:34:45.787325  537626 api_server.go:72] duration metric: took 18.937121492s to wait for apiserver process to appear ...
	I1008 17:34:45.787354  537626 api_server.go:88] waiting for apiserver healthz status ...
	I1008 17:34:45.787377  537626 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I1008 17:34:45.792397  537626 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I1008 17:34:45.793228  537626 api_server.go:141] control plane version: v1.31.1
	I1008 17:34:45.793249  537626 api_server.go:131] duration metric: took 5.888645ms to wait for apiserver health ...
	I1008 17:34:45.793257  537626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 17:34:45.800410  537626 system_pods.go:59] 17 kube-system pods found
	I1008 17:34:45.800435  537626 system_pods.go:61] "coredns-7c65d6cfc9-4zs69" [a555f46c-9cef-4b78-a31f-6ad3cd88c338] Running
	I1008 17:34:45.800443  537626 system_pods.go:61] "csi-hostpath-attacher-0" [db6e092c-da8c-46ea-8e60-b2c9a91b4497] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 17:34:45.800451  537626 system_pods.go:61] "csi-hostpath-resizer-0" [70c956d8-0d97-477b-a407-7e74b8d53685] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 17:34:45.800460  537626 system_pods.go:61] "csi-hostpathplugin-r4djc" [64366d61-0edb-46a5-8813-2d30575552a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 17:34:45.800468  537626 system_pods.go:61] "etcd-addons-738106" [72430698-e927-4d04-8392-0bfc6eb98c60] Running
	I1008 17:34:45.800473  537626 system_pods.go:61] "kube-apiserver-addons-738106" [3af39427-8de7-4cf3-93c5-783349179428] Running
	I1008 17:34:45.800477  537626 system_pods.go:61] "kube-controller-manager-addons-738106" [660c3a28-4781-4e08-a328-9d59d85d6245] Running
	I1008 17:34:45.800482  537626 system_pods.go:61] "kube-ingress-dns-minikube" [2ed789e2-91c6-459c-8366-72e74bc03132] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 17:34:45.800488  537626 system_pods.go:61] "kube-proxy-7clnt" [e9720997-cb8e-4870-8f6b-9b3bc1a30218] Running
	I1008 17:34:45.800492  537626 system_pods.go:61] "kube-scheduler-addons-738106" [45b2c7a7-8c10-4894-bad7-5af6f70a4b83] Running
	I1008 17:34:45.800497  537626 system_pods.go:61] "metrics-server-84c5f94fbc-w72vc" [01f00ce3-494b-4d47-ab30-2439d417f6b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 17:34:45.800500  537626 system_pods.go:61] "nvidia-device-plugin-daemonset-dz2k9" [42202b26-4c49-44bb-836f-cfcd7b7a3a5f] Running
	I1008 17:34:45.800506  537626 system_pods.go:61] "registry-66c9cd494c-wsg7d" [1e47d1a8-5e9a-4214-9302-306efa48abeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 17:34:45.800511  537626 system_pods.go:61] "registry-proxy-6hj56" [0c50d7bc-8a1f-4eb6-a83a-d29fda2e2722] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 17:34:45.800518  537626 system_pods.go:61] "snapshot-controller-56fcc65765-4rtbq" [fe86a2d5-d3af-4ca8-8c16-4a43b4d10a1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.800526  537626 system_pods.go:61] "snapshot-controller-56fcc65765-6bdsg" [e24c8dfd-265c-4e3a-82c3-41ce76e322f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.800554  537626 system_pods.go:61] "storage-provisioner" [1b01ab9a-1013-49d5-9c61-88a751457598] Running
	I1008 17:34:45.800563  537626 system_pods.go:74] duration metric: took 7.299999ms to wait for pod list to return data ...
	I1008 17:34:45.800569  537626 default_sa.go:34] waiting for default service account to be created ...
	I1008 17:34:45.802607  537626 default_sa.go:45] found service account: "default"
	I1008 17:34:45.802622  537626 default_sa.go:55] duration metric: took 2.048023ms for default service account to be created ...
	I1008 17:34:45.802628  537626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 17:34:45.811153  537626 system_pods.go:86] 17 kube-system pods found
	I1008 17:34:45.811175  537626 system_pods.go:89] "coredns-7c65d6cfc9-4zs69" [a555f46c-9cef-4b78-a31f-6ad3cd88c338] Running
	I1008 17:34:45.811182  537626 system_pods.go:89] "csi-hostpath-attacher-0" [db6e092c-da8c-46ea-8e60-b2c9a91b4497] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 17:34:45.811190  537626 system_pods.go:89] "csi-hostpath-resizer-0" [70c956d8-0d97-477b-a407-7e74b8d53685] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 17:34:45.811197  537626 system_pods.go:89] "csi-hostpathplugin-r4djc" [64366d61-0edb-46a5-8813-2d30575552a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 17:34:45.811202  537626 system_pods.go:89] "etcd-addons-738106" [72430698-e927-4d04-8392-0bfc6eb98c60] Running
	I1008 17:34:45.811206  537626 system_pods.go:89] "kube-apiserver-addons-738106" [3af39427-8de7-4cf3-93c5-783349179428] Running
	I1008 17:34:45.811210  537626 system_pods.go:89] "kube-controller-manager-addons-738106" [660c3a28-4781-4e08-a328-9d59d85d6245] Running
	I1008 17:34:45.811215  537626 system_pods.go:89] "kube-ingress-dns-minikube" [2ed789e2-91c6-459c-8366-72e74bc03132] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 17:34:45.811219  537626 system_pods.go:89] "kube-proxy-7clnt" [e9720997-cb8e-4870-8f6b-9b3bc1a30218] Running
	I1008 17:34:45.811222  537626 system_pods.go:89] "kube-scheduler-addons-738106" [45b2c7a7-8c10-4894-bad7-5af6f70a4b83] Running
	I1008 17:34:45.811226  537626 system_pods.go:89] "metrics-server-84c5f94fbc-w72vc" [01f00ce3-494b-4d47-ab30-2439d417f6b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 17:34:45.811231  537626 system_pods.go:89] "nvidia-device-plugin-daemonset-dz2k9" [42202b26-4c49-44bb-836f-cfcd7b7a3a5f] Running
	I1008 17:34:45.811236  537626 system_pods.go:89] "registry-66c9cd494c-wsg7d" [1e47d1a8-5e9a-4214-9302-306efa48abeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 17:34:45.811241  537626 system_pods.go:89] "registry-proxy-6hj56" [0c50d7bc-8a1f-4eb6-a83a-d29fda2e2722] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 17:34:45.811246  537626 system_pods.go:89] "snapshot-controller-56fcc65765-4rtbq" [fe86a2d5-d3af-4ca8-8c16-4a43b4d10a1b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.811255  537626 system_pods.go:89] "snapshot-controller-56fcc65765-6bdsg" [e24c8dfd-265c-4e3a-82c3-41ce76e322f3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 17:34:45.811259  537626 system_pods.go:89] "storage-provisioner" [1b01ab9a-1013-49d5-9c61-88a751457598] Running
	I1008 17:34:45.811265  537626 system_pods.go:126] duration metric: took 8.632263ms to wait for k8s-apps to be running ...
	I1008 17:34:45.811272  537626 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 17:34:45.811316  537626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:34:45.833679  537626 system_svc.go:56] duration metric: took 22.401969ms WaitForService to wait for kubelet
	I1008 17:34:45.833703  537626 kubeadm.go:582] duration metric: took 18.983505627s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:34:45.833721  537626 node_conditions.go:102] verifying NodePressure condition ...
	I1008 17:34:45.836687  537626 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:34:45.836708  537626 node_conditions.go:123] node cpu capacity is 2
	I1008 17:34:45.836720  537626 node_conditions.go:105] duration metric: took 2.982947ms to run NodePressure ...
	I1008 17:34:45.836731  537626 start.go:241] waiting for startup goroutines ...
	I1008 17:34:45.893899  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:45.928576  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:45.928717  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:46.107866  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:46.396044  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:46.430721  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:46.431131  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:46.607503  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:46.893786  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:46.928861  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:46.929340  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:47.107289  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:47.395458  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:47.428751  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:47.431521  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:47.606676  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:47.894081  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:47.929040  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:47.929305  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:48.107200  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:48.395201  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:48.429769  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:48.430241  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:48.607015  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:48.895132  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:48.932593  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:48.932897  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:49.107570  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:49.423087  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:49.429895  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:49.430388  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:49.607072  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:49.894699  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:49.928425  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:49.928826  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:50.107664  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:50.396263  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:50.428395  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:50.429759  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:50.608434  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:50.894630  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:50.928162  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:50.928452  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:51.107102  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:51.395923  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:51.432031  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:51.432067  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:51.607269  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:51.894993  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:51.929168  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:51.930627  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:52.110183  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:52.397037  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:52.429571  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:52.430013  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:52.607980  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:52.896411  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:52.930334  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:52.930485  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:53.107450  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:53.396230  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:53.429182  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:53.429851  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:53.607034  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:53.895219  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:53.928832  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:53.929099  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:54.106713  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:54.396916  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:54.428764  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:54.429122  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:54.606480  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:54.895116  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:54.928193  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:54.929752  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:55.107590  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:55.395356  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:55.435392  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:55.435865  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:55.609495  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:55.895374  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:55.929546  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:55.929841  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:56.109026  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:56.396668  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:56.429858  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:56.429872  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:56.606777  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:56.894379  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:56.929854  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:56.931344  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:57.108899  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:57.396943  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:57.429196  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:57.429843  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:57.611082  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:57.895285  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:57.929224  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:57.930728  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:58.106897  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:58.398715  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:58.429119  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:58.429559  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:58.610121  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:58.894876  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:58.928814  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:58.928942  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:59.106812  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:59.394294  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:59.428751  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:59.429797  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:34:59.607191  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:34:59.895251  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:34:59.928023  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:34:59.929669  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:00.107133  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:00.415367  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:00.432023  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:00.438181  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:00.609351  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:00.895030  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:00.931923  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:00.932188  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:01.112212  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:01.394435  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:01.431143  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:01.442512  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:01.606829  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:01.894718  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:01.928597  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:01.929923  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:02.107880  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:02.394030  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:02.430499  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:02.430775  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:02.607808  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:03.226700  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:03.226887  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:03.227352  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:03.227612  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:03.393893  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:03.428791  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:03.429121  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:03.607113  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:03.895505  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:03.928261  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:03.928285  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:04.106908  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:04.394750  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:04.429620  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:04.429835  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:04.609009  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:04.894617  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:04.928745  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:04.929291  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:05.107697  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:05.524427  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:05.524768  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:05.525001  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:05.606802  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:05.894780  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:05.928589  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:05.928995  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:06.108743  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:06.394038  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:06.428384  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:06.429511  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:06.606886  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:06.894738  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:06.930164  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:06.930506  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:07.107379  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:07.394530  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:07.428331  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:07.429641  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:07.607726  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:07.895254  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:07.929396  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:07.929955  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:08.106521  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:08.394147  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:08.428836  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:08.429184  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:08.607296  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:08.895839  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:08.998831  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:09.000108  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:09.107674  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:09.394525  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:09.429351  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:09.430242  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:09.607364  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:09.896784  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:09.930013  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:09.931125  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 17:35:10.107233  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:10.396576  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:10.428298  537626 kapi.go:107] duration metric: took 35.504126288s to wait for kubernetes.io/minikube-addons=registry ...
	I1008 17:35:10.430374  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:10.606765  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:10.894611  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:10.927885  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:11.107475  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:11.394708  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:11.428795  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:11.607132  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:11.895535  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:12.310809  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:12.314395  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:12.406237  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:12.428465  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:12.607277  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:12.894975  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:12.929553  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:13.108571  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:13.406410  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:13.430344  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:13.607876  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:13.895998  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:13.934904  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:14.106201  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:14.395667  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:14.429201  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:14.608022  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:14.894989  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:14.928941  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:15.110470  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:15.396682  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:15.429396  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:15.607015  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:15.894520  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:15.929194  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:16.106711  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:16.404729  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:16.428152  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:16.607122  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:16.895279  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:16.996650  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:17.106549  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:17.395611  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:17.427975  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:17.606909  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:17.894372  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:17.929056  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:18.106425  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:18.398309  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:18.471451  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:18.607243  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:18.894766  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:18.928301  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:19.107679  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:19.394480  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:19.429130  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:19.607736  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:19.894139  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:19.929298  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:20.106962  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:20.402356  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:20.428990  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:20.606490  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:20.894966  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:20.928733  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:21.107085  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:21.395208  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:21.429853  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:21.607240  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:21.894394  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:21.929341  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:22.110156  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:22.405596  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:22.504403  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:22.607787  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:22.894552  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:22.928307  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:23.106393  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:23.394661  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:23.428310  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:23.606631  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:23.895251  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:23.930279  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:24.106838  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:24.394778  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:24.428843  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:24.607105  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:24.895278  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:24.929200  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:25.107136  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:25.402566  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:25.435660  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:25.608483  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:25.894491  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:25.928921  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:26.107934  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:26.394774  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:26.428272  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:26.607080  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:26.894487  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:26.929792  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:27.107393  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:27.396657  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:27.427969  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:27.607120  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:27.894645  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:27.928219  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:28.107548  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:28.396950  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:28.427742  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:28.608002  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:28.894097  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:28.929147  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:29.107348  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:29.394005  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:29.428200  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:29.607134  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:29.895065  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:29.928192  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:30.107287  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 17:35:30.398431  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:30.429883  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:30.606733  537626 kapi.go:107] duration metric: took 55.004459155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1008 17:35:30.895274  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:30.929047  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:31.394719  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:31.427922  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:31.894711  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:31.928327  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:32.394854  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:32.428132  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:32.896008  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:32.928293  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:33.394716  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:33.429206  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:33.893999  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:33.928975  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:34.396538  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:34.427728  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:34.896333  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:34.929069  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:35.395514  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:35.429284  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:35.895223  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:35.929205  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:36.397319  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:36.428544  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:36.894587  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:36.928004  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:37.395517  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:37.433811  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:37.895047  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:37.929982  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:38.396635  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:38.428244  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:38.894670  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:38.928115  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:39.394212  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:39.429293  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:39.894046  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:39.928470  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:40.395482  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:40.496836  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:40.902607  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:40.930108  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:41.394507  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:41.428638  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:41.894645  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:41.928493  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:42.403178  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:42.496852  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:42.896654  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:42.928626  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:43.394350  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:43.429485  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:43.894736  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:43.930490  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:44.394664  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:44.428155  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:44.894611  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:44.928143  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:45.459821  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:45.460202  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:45.895045  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:45.928552  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:46.397064  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:46.428483  537626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 17:35:46.894912  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:46.928330  537626 kapi.go:107] duration metric: took 1m12.004158927s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1008 17:35:47.394200  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:47.894693  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:48.394945  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:48.895361  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:49.394897  537626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 17:35:49.899133  537626 kapi.go:107] duration metric: took 1m12.508257183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1008 17:35:49.900781  537626 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-738106 cluster.
	I1008 17:35:49.902155  537626 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1008 17:35:49.903459  537626 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1008 17:35:49.904727  537626 out.go:177] * Enabled addons: storage-provisioner, metrics-server, cloud-spanner, nvidia-device-plugin, inspektor-gadget, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1008 17:35:49.906519  537626 addons.go:510] duration metric: took 1m23.056407589s for enable addons: enabled=[storage-provisioner metrics-server cloud-spanner nvidia-device-plugin inspektor-gadget ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1008 17:35:49.906567  537626 start.go:246] waiting for cluster config update ...
	I1008 17:35:49.906588  537626 start.go:255] writing updated cluster config ...
	I1008 17:35:49.907193  537626 ssh_runner.go:195] Run: rm -f paused
	I1008 17:35:49.960396  537626 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 17:35:49.961635  537626 out.go:177] * Done! kubectl is now configured to use "addons-738106" cluster and "default" namespace by default
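As a minimal illustration of the two behaviours logged above (addon pods are polled by label selector until Ready, and the gcp-auth addon mounts credentials into new pods unless they opt out), the sketch below shows a hypothetical check and an opt-out pod spec. The label selector and the gcp-auth-skip-secret key are taken from the log messages; the pod name, image, label value, and command are assumptions for illustration only, not part of this test run:

    # List the addon pods the wait loop above was polling (selector from the kapi.go lines).
    kubectl --context addons-738106 get pods -A -l kubernetes.io/minikube-addons=gcp-auth

    # Hypothetical pod that opts out of the credential mount, per the message above.
    # Only the label key gcp-auth-skip-secret comes from the log; the name, image,
    # and the value "true" are placeholders.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]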
	
	
	==> CRI-O <==
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.579518411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409814579492025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c6f2d35-77eb-436e-8fac-35acad526b12 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.580089237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6febdade-7317-4800-b13d-24efbfc69a7b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.580145435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6febdade-7317-4800-b13d-24efbfc69a7b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.580552093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5e5cea54617aa678628ba4b709b14fea3020a30e1b0bd5feeb93b8fbc1a47ff,PodSandboxId:1f0dbc5a09c5a060dfbd48176c61990d01d75620471858f1259b61e63dc6400e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728409608558239516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f448f11-f3a9-40df-9e9d-182a9b287c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 54229a0d-9b3f-
4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1de,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172840889995061
9994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f561
7342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596
bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e
39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4
b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6febdade-7317-4800-b13d-24efbfc69a7b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.618139821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d424d451-1721-43aa-942f-2764332fd822 name=/runtime.v1.RuntimeService/Version
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.618211627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d424d451-1721-43aa-942f-2764332fd822 name=/runtime.v1.RuntimeService/Version
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.619570471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d15ab7dc-4df4-4756-af08-ed2fe444b8b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.620731730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409814620704984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d15ab7dc-4df4-4756-af08-ed2fe444b8b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.621349775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f35a5d6-e6c2-452a-aab5-780b1aa6cb2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.621404769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f35a5d6-e6c2-452a-aab5-780b1aa6cb2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.621828616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5e5cea54617aa678628ba4b709b14fea3020a30e1b0bd5feeb93b8fbc1a47ff,PodSandboxId:1f0dbc5a09c5a060dfbd48176c61990d01d75620471858f1259b61e63dc6400e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728409608558239516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f448f11-f3a9-40df-9e9d-182a9b287c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 54229a0d-9b3f-
4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1de,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172840889995061
9994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f561
7342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596
bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e
39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4
b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f35a5d6-e6c2-452a-aab5-780b1aa6cb2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.657765655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf4a8d1a-3b57-42bb-a63a-d5b3fef4c07a name=/runtime.v1.RuntimeService/Version
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.658192102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf4a8d1a-3b57-42bb-a63a-d5b3fef4c07a name=/runtime.v1.RuntimeService/Version
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.664165669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=623d5827-0f1e-4c13-9877-70eb3c4fa9a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.665639145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409814665617974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=623d5827-0f1e-4c13-9877-70eb3c4fa9a7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.666678414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2952da00-15f6-40bd-888b-5b0900d86550 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.666847106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2952da00-15f6-40bd-888b-5b0900d86550 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.667449813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5e5cea54617aa678628ba4b709b14fea3020a30e1b0bd5feeb93b8fbc1a47ff,PodSandboxId:1f0dbc5a09c5a060dfbd48176c61990d01d75620471858f1259b61e63dc6400e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728409608558239516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f448f11-f3a9-40df-9e9d-182a9b287c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 54229a0d-9b3f-
4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1de,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172840889995061
9994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f561
7342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596
bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e
39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4
b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2952da00-15f6-40bd-888b-5b0900d86550 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.700457366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26b4e784-a63a-4510-9104-e8adc0a49d2b name=/runtime.v1.RuntimeService/Version
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.700544041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26b4e784-a63a-4510-9104-e8adc0a49d2b name=/runtime.v1.RuntimeService/Version
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.701641129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70b13b59-dbce-4c6a-84a2-d80fa7f9a43b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.702831953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409814702807420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70b13b59-dbce-4c6a-84a2-d80fa7f9a43b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.703634934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a97a32ed-77a8-42e4-a734-928263726526 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.703694872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a97a32ed-77a8-42e4-a734-928263726526 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 17:50:14 addons-738106 crio[663]: time="2024-10-08 17:50:14.704058310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5e5cea54617aa678628ba4b709b14fea3020a30e1b0bd5feeb93b8fbc1a47ff,PodSandboxId:1f0dbc5a09c5a060dfbd48176c61990d01d75620471858f1259b61e63dc6400e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728409608558239516,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f448f11-f3a9-40df-9e9d-182a9b287c9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6eda8e92df21603c8837a5157ff81d7d00b2672886a6f12da482b02b7fa7b1,PodSandboxId:8033a1381ebd3e149fa1442313d7362978fed3e41787d803248c15477267eaa7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728409606089164877,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hkkxb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66e7d107-14b4-456b-b417-ad6c6f92477a,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1ca0c06c8398c562cd9e48ba589fe0665c13589be6b99c8d2e2eca1fab43b3,PodSandboxId:3f6b2acb99d737c14a2c7cf6431a9041d991b8e73c3d440f13b630fe81fd5367,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728409467568817343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71eea7b8-ba54-449e-98bf-d99695b23e27,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08160c6eb321a1b60936552ed4162e0e8dc4321e9f2423161983f064c733e96b,PodSandboxId:de4daf79167d628b1b9d3ba58bc1ee626bfc73e756bb737b84a313ff2033a8e7,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1728409458222887251,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-tn9fh,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 54229a0d-9b3f-
4514-9ca0-4cb2050631c8,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5c0b2131351ad3af4c08784d60df8a4ecf652dd01c4c01b7a524579deb062d,PodSandboxId:9d8268bb8e46de870a941239e0bcaaed5d46032e11d94e2042976d3bc9e2fc2d,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728408905617242431,Labels:map[string]string{io.kubernetes.container.name: local-
path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-xzzz5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d81a508-cd2a-4780-b662-94e192857d97,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddaa32bde363154b5e69c25970f8f1639d532746bed8e6c318363c30a1e1de,PodSandboxId:9d92fd6a40c0aa92679375ccdffb4693815ae930b931671981d684e50cf41d5c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:172840889995061
9994,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-w72vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f00ce3-494b-4d47-ab30-2439d417f6b6,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c,PodSandboxId:bedc5f99e2d239e117e4140185d55ad65ad36439a333811e7935a6a9be39f485,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f561
7342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728408872505698533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b01ab9a-1013-49d5-9c61-88a751457598,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4,PodSandboxId:38da5754ac9d2cb28eab68b8a7a09b414733b7f8071a18c855a8b5e184f75f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596
bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728408870388548642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4zs69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a555f46c-9cef-4b78-a31f-6ad3cd88c338,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7,PodSandboxId:d6e44ed8aed0469627093319d3e683e93d7d09498f36ac5db629772c621501e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728408866770573117,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7clnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9720997-cb8e-4870-8f6b-9b3bc1a30218,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528,PodSandboxId:9d01b0820052ac1cb960d6667017ed6b6df6789d10ee7c9a6f90e582a8f6bbd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e
39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728408856177016111,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4292f94dcd95b03e27bd35577ced695,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607,PodSandboxId:6fd03453e7f22ca30b319805b61e2912010bd2b7e68d642cad3d558dde6166f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b5526
0db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728408856186194658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d765bbb797604077559076504bfe5fbc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f,PodSandboxId:0be3e9d8cb0a9e9dc7947f85bb47665e5bdc6bc88ac795868eda6331fa40db97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4
b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728408856187301617,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2777b032cd7789b5ac8a21048e1a6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671,PodSandboxId:e7f5dc9fe6f85906e24f83d3e0c61f20c5a9bdd9c9c8c7d2a682c67029119396,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728408856191405722,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-738106,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5ba9c3e925e462f2f12fbbb8e848484,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a97a32ed-77a8-42e4-a734-928263726526 name=/runtime.v1.RuntimeService/ListContainers
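	
	Note: the CRI-O debug entries above appear to be the crio unit journal from the node. Assuming shell access to the minikube VM for this profile, a comparable dump could presumably be pulled with:
	
	  minikube -p addons-738106 ssh -- sudo journalctl -u crio --no-pager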
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5e5cea54617a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   1f0dbc5a09c5a       busybox
	4a6eda8e92df2       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   8033a1381ebd3       hello-world-app-55bf9c44b4-hkkxb
	7a1ca0c06c839       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   3f6b2acb99d73       nginx
	08160c6eb321a       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                   5 minutes ago       Running             headlamp                  0                   de4daf79167d6       headlamp-7b5c95b59d-tn9fh
	1a5c0b2131351       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        15 minutes ago      Running             local-path-provisioner    0                   9d8268bb8e46d       local-path-provisioner-86d989889c-xzzz5
	60ddaa32bde36       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   9d92fd6a40c0a       metrics-server-84c5f94fbc-w72vc
	6a4f440e54303       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   bedc5f99e2d23       storage-provisioner
	b662f6217a7f0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   38da5754ac9d2       coredns-7c65d6cfc9-4zs69
	2a19a2f8241f5       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   d6e44ed8aed04       kube-proxy-7clnt
	b83af138a30ef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   e7f5dc9fe6f85       kube-scheduler-addons-738106
	7798fd88ce5cc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   0be3e9d8cb0a9       etcd-addons-738106
	c5040fb76a212       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   6fd03453e7f22       kube-controller-manager-addons-738106
	1f2191692905b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   9d01b0820052a       kube-apiserver-addons-738106
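	
	Note: this table follows the layout of crictl's container listing; assuming SSH access to the node, a similar listing could presumably be reproduced with:
	
	  minikube -p addons-738106 ssh -- sudo crictl ps -a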
	
	
	==> coredns [b662f6217a7f0bc0a0c971c676f73ed08be6671f95781ffb3c005b09eaf3a3b4] <==
	[INFO] 10.244.0.20:40407 - 52780 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000156855s
	[INFO] 10.244.0.20:40407 - 40565 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000116322s
	[INFO] 10.244.0.20:40407 - 5196 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000127287s
	[INFO] 10.244.0.20:40407 - 18648 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000188977s
	[INFO] 10.244.0.20:56422 - 19771 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000077517s
	[INFO] 10.244.0.20:56422 - 48743 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056366s
	[INFO] 10.244.0.20:56422 - 23253 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035783s
	[INFO] 10.244.0.20:56422 - 14095 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074557s
	[INFO] 10.244.0.20:56422 - 21915 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031146s
	[INFO] 10.244.0.20:56422 - 55812 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029211s
	[INFO] 10.244.0.20:56422 - 33467 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000093008s
	[INFO] 10.244.0.20:59295 - 64074 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000228119s
	[INFO] 10.244.0.20:54865 - 37498 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056861s
	[INFO] 10.244.0.20:54865 - 58758 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000411119s
	[INFO] 10.244.0.20:54865 - 60839 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004472s
	[INFO] 10.244.0.20:54865 - 52847 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050944s
	[INFO] 10.244.0.20:54865 - 53576 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030213s
	[INFO] 10.244.0.20:54865 - 9189 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034466s
	[INFO] 10.244.0.20:54865 - 37942 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069325s
	[INFO] 10.244.0.20:59295 - 12524 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093544s
	[INFO] 10.244.0.20:59295 - 60035 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094961s
	[INFO] 10.244.0.20:59295 - 28257 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051312s
	[INFO] 10.244.0.20:59295 - 4962 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085218s
	[INFO] 10.244.0.20:59295 - 58542 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000226439s
	[INFO] 10.244.0.20:59295 - 36862 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000080774s
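	
	Note: the CoreDNS log above should be re-fetchable from the cluster itself, using the pod name shown in the section header (an assumption based on the standard kubectl logs workflow):
	
	  kubectl --context addons-738106 -n kube-system logs coredns-7c65d6cfc9-4zs69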
	
	
	==> describe nodes <==
	Name:               addons-738106
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-738106
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=addons-738106
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T17_34_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-738106
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:34:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-738106
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 17:50:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 17:46:59 +0000   Tue, 08 Oct 2024 17:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 17:46:59 +0000   Tue, 08 Oct 2024 17:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 17:46:59 +0000   Tue, 08 Oct 2024 17:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 17:46:59 +0000   Tue, 08 Oct 2024 17:34:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    addons-738106
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac2bf36f4d4c47babf58620ac692990b
	  System UUID:                ac2bf36f-4d4c-47ba-bf58-620ac692990b
	  Boot ID:                    e599bb5c-42a3-493c-a0fc-c38f314042f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-hkkxb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  headlamp                    headlamp-7b5c95b59d-tn9fh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7c65d6cfc9-4zs69                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-738106                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-738106               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-738106      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7clnt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-738106               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-w72vc            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-xzzz5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-738106 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-738106 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-738106 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-738106 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-738106 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-738106 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-738106 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-738106 event: Registered Node addons-738106 in Controller
	
	
	==> dmesg <==
	[  +6.486893] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.096290] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.239141] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.511294] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +4.556940] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.002396] kauditd_printk_skb: 133 callbacks suppressed
	[  +8.152319] kauditd_printk_skb: 75 callbacks suppressed
	[Oct 8 17:35] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.453453] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.291150] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.329215] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.160490] kauditd_printk_skb: 34 callbacks suppressed
	[ +16.451243] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.864803] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.968769] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 8 17:44] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.002286] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.400019] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.061617] kauditd_printk_skb: 43 callbacks suppressed
	[  +6.708490] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.168079] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.009426] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 8 17:45] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 8 17:46] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.188425] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [7798fd88ce5cc48b6849fc8544d9669192abd9af47c6166e1c4c1567af9eff2f] <==
	{"level":"warn","ts":"2024-10-08T17:44:17.955439Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T17:44:17.580374Z","time spent":"374.984396ms","remote":"127.0.0.1:39056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3496,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-5b584cc74-5ftt2\" mod_revision:2104 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-5b584cc74-5ftt2\" value_size:3427 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-5b584cc74-5ftt2\" > >"}
	{"level":"warn","ts":"2024-10-08T17:44:29.403052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.619052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-10-08T17:44:29.403781Z","caller":"traceutil/trace.go:171","msg":"trace[561843564] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2186; }","duration":"232.356493ms","start":"2024-10-08T17:44:29.171411Z","end":"2024-10-08T17:44:29.403768Z","steps":["trace[561843564] 'range keys from in-memory index tree'  (duration: 231.502291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.245315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:29.404510Z","caller":"traceutil/trace.go:171","msg":"trace[818453607] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2186; }","duration":"229.603167ms","start":"2024-10-08T17:44:29.174898Z","end":"2024-10-08T17:44:29.404501Z","steps":["trace[818453607] 'range keys from in-memory index tree'  (duration: 228.167673ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.147277ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:29.405298Z","caller":"traceutil/trace.go:171","msg":"trace[783401228] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2186; }","duration":"216.176661ms","start":"2024-10-08T17:44:29.189114Z","end":"2024-10-08T17:44:29.405291Z","steps":["trace[783401228] 'range keys from in-memory index tree'  (duration: 214.1407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.534198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/gadget/gadget-6c44ff6658\" ","response":"range_response_count:1 size:7285"}
	{"level":"info","ts":"2024-10-08T17:44:29.405375Z","caller":"traceutil/trace.go:171","msg":"trace[1399438501] range","detail":"{range_begin:/registry/controllerrevisions/gadget/gadget-6c44ff6658; range_end:; response_count:1; response_revision:2186; }","duration":"295.330393ms","start":"2024-10-08T17:44:29.110039Z","end":"2024-10-08T17:44:29.405370Z","steps":["trace[1399438501] 'range keys from in-memory index tree'  (duration: 293.467775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403600Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.962179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/gadget.kinvolk.io/traces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:29.405505Z","caller":"traceutil/trace.go:171","msg":"trace[1743803437] range","detail":"{range_begin:/registry/gadget.kinvolk.io/traces; range_end:; response_count:0; response_revision:2186; }","duration":"296.862444ms","start":"2024-10-08T17:44:29.108634Z","end":"2024-10-08T17:44:29.405497Z","steps":["trace[1743803437] 'range keys from in-memory index tree'  (duration: 294.935645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.040112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:588"}
	{"level":"info","ts":"2024-10-08T17:44:29.407335Z","caller":"traceutil/trace.go:171","msg":"trace[231416343] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:2186; }","duration":"298.753338ms","start":"2024-10-08T17:44:29.108573Z","end":"2024-10-08T17:44:29.407327Z","steps":["trace[231416343] 'range keys from in-memory index tree'  (duration: 294.929758ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:29.403649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"293.649745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-pr252\" ","response":"range_response_count:1 size:8385"}
	{"level":"info","ts":"2024-10-08T17:44:29.407451Z","caller":"traceutil/trace.go:171","msg":"trace[251154913] range","detail":"{range_begin:/registry/pods/gadget/gadget-pr252; range_end:; response_count:1; response_revision:2186; }","duration":"297.44885ms","start":"2024-10-08T17:44:29.109996Z","end":"2024-10-08T17:44:29.407445Z","steps":["trace[251154913] 'range keys from in-memory index tree'  (duration: 293.555627ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T17:44:34.294435Z","caller":"traceutil/trace.go:171","msg":"trace[1526887688] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2215; }","duration":"250.74649ms","start":"2024-10-08T17:44:34.043673Z","end":"2024-10-08T17:44:34.294420Z","steps":["trace[1526887688] 'process raft request'  (duration: 250.591942ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T17:44:34.296129Z","caller":"traceutil/trace.go:171","msg":"trace[1914669631] linearizableReadLoop","detail":"{readStateIndex:2370; appliedIndex:2368; }","duration":"123.843833ms","start":"2024-10-08T17:44:34.172214Z","end":"2024-10-08T17:44:34.296058Z","steps":["trace[1914669631] 'read index received'  (duration: 122.118024ms)","trace[1914669631] 'applied index is now lower than readState.Index'  (duration: 1.725048ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-08T17:44:34.296291Z","caller":"traceutil/trace.go:171","msg":"trace[1661852763] transaction","detail":"{read_only:false; response_revision:2216; number_of_response:1; }","duration":"242.132738ms","start":"2024-10-08T17:44:34.054148Z","end":"2024-10-08T17:44:34.296281Z","steps":["trace[1661852763] 'process raft request'  (duration: 241.684283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:34.296413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.182715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:34.296430Z","caller":"traceutil/trace.go:171","msg":"trace[301378954] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2216; }","duration":"124.216018ms","start":"2024-10-08T17:44:34.172210Z","end":"2024-10-08T17:44:34.296426Z","steps":["trace[301378954] 'agreement among raft nodes before linearized reading'  (duration: 124.170006ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T17:44:34.296512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.667688ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T17:44:34.296525Z","caller":"traceutil/trace.go:171","msg":"trace[1234633695] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2216; }","duration":"106.682361ms","start":"2024-10-08T17:44:34.189839Z","end":"2024-10-08T17:44:34.296521Z","steps":["trace[1234633695] 'agreement among raft nodes before linearized reading'  (duration: 106.658092ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T17:49:17.086664Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2102}
	{"level":"info","ts":"2024-10-08T17:49:17.107727Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2102,"took":"20.154058ms","hash":3160301563,"current-db-size-bytes":6746112,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":4526080,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-10-08T17:49:17.107828Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3160301563,"revision":2102,"compact-revision":1507}
	
	
	==> kernel <==
	 17:50:15 up 16 min,  0 users,  load average: 0.10, 0.21, 0.25
	Linux addons-738106 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f2191692905b5472d88ca6d9ac42b1bbb7ee5bfca3aca8d671f1bd3d85a7528] <==
	 > logger="UnhandledError"
	E1008 17:36:02.407113       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.19.223:443: connect: connection refused" logger="UnhandledError"
	E1008 17:36:02.410612       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.19.223:443: connect: connection refused" logger="UnhandledError"
	E1008 17:36:02.415894       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.19.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.19.223:443: connect: connection refused" logger="UnhandledError"
	I1008 17:36:02.480910       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1008 17:44:13.553406       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.137.147"}
	I1008 17:44:25.153781       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1008 17:44:25.323571       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.35.237"}
	I1008 17:44:29.087314       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1008 17:44:30.442496       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1008 17:44:42.239745       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1008 17:45:01.847768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.847832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.879808       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.879904       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.900361       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.900462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.902378       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.902669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 17:45:01.923826       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 17:45:01.923882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1008 17:45:02.901589       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1008 17:45:02.926529       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1008 17:45:03.049327       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1008 17:46:44.997149       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.122.115"}
	
	
	==> kube-controller-manager [c5040fb76a212d40caa68df41a108495636bb1c0f853349b1e168354eb96d607] <==
	E1008 17:47:50.974152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:48:16.591226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:48:16.591304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:48:18.351108       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:48:18.351159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:48:19.618710       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:48:19.618762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:48:30.902019       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:48:30.902117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:04.885612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:04.885716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:10.008823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:10.008892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:13.988481       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:13.988609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:16.876901       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:16.877013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:45.750726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:45.750895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:52.957357       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:52.957468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:49:53.976231       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:49:53.976281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1008 17:50:01.037747       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1008 17:50:01.037851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2a19a2f8241f593a00519179db559a5e3ca9d5de153687761205e7e36f70f2a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 17:34:27.295287       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 17:34:27.320256       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	E1008 17:34:27.320461       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 17:34:27.397583       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 17:34:27.397619       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 17:34:27.397642       1 server_linux.go:169] "Using iptables Proxier"
	I1008 17:34:27.401097       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 17:34:27.401400       1 server.go:483] "Version info" version="v1.31.1"
	I1008 17:34:27.401410       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 17:34:27.404350       1 config.go:199] "Starting service config controller"
	I1008 17:34:27.404360       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 17:34:27.404384       1 config.go:105] "Starting endpoint slice config controller"
	I1008 17:34:27.404565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 17:34:27.410322       1 config.go:328] "Starting node config controller"
	I1008 17:34:27.410346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 17:34:27.505060       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 17:34:27.505103       1 shared_informer.go:320] Caches are synced for service config
	I1008 17:34:27.513590       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b83af138a30ef8feee9cf0e1ae03e0b35212c6a358e8e0c9c8801cdf06bb6671] <==
	W1008 17:34:18.593278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 17:34:18.593316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:18.594480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 17:34:18.597397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:18.597319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 17:34:18.597604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.401908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 17:34:19.401988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.414317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 17:34:19.414369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.570535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 17:34:19.571335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.698331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 17:34:19.698532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.713910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 17:34:19.713993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.808020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 17:34:19.809002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.808813       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 17:34:19.809164       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1008 17:34:19.812581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 17:34:19.812681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 17:34:19.892041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 17:34:19.892085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1008 17:34:22.674618       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 17:48:41 addons-738106 kubelet[1202]: E1008 17:48:41.781420    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409721780502594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:48:51 addons-738106 kubelet[1202]: E1008 17:48:51.785288    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409731784707894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:48:51 addons-738106 kubelet[1202]: E1008 17:48:51.785329    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409731784707894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:01 addons-738106 kubelet[1202]: E1008 17:49:01.788480    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409741788165663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:01 addons-738106 kubelet[1202]: E1008 17:49:01.788522    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409741788165663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:11 addons-738106 kubelet[1202]: E1008 17:49:11.791835    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409751791263419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:11 addons-738106 kubelet[1202]: E1008 17:49:11.792203    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409751791263419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:21 addons-738106 kubelet[1202]: E1008 17:49:21.433177    1202 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 17:49:21 addons-738106 kubelet[1202]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 17:49:21 addons-738106 kubelet[1202]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 17:49:21 addons-738106 kubelet[1202]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 17:49:21 addons-738106 kubelet[1202]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 17:49:21 addons-738106 kubelet[1202]: E1008 17:49:21.795080    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409761794653753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:21 addons-738106 kubelet[1202]: E1008 17:49:21.795143    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409761794653753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:22 addons-738106 kubelet[1202]: I1008 17:49:22.409135    1202 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 17:49:31 addons-738106 kubelet[1202]: E1008 17:49:31.797671    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409771797297226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:31 addons-738106 kubelet[1202]: E1008 17:49:31.797726    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409771797297226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:41 addons-738106 kubelet[1202]: E1008 17:49:41.801016    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409781800452104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:41 addons-738106 kubelet[1202]: E1008 17:49:41.801305    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409781800452104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:51 addons-738106 kubelet[1202]: E1008 17:49:51.804304    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409791803866492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:49:51 addons-738106 kubelet[1202]: E1008 17:49:51.804744    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409791803866492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:50:01 addons-738106 kubelet[1202]: E1008 17:50:01.807260    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409801806683216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:50:01 addons-738106 kubelet[1202]: E1008 17:50:01.807543    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409801806683216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:50:11 addons-738106 kubelet[1202]: E1008 17:50:11.810547    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409811810098551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 17:50:11 addons-738106 kubelet[1202]: E1008 17:50:11.811131    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728409811810098551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6a4f440e5430368e322869961c7d778f0a2028f272fb731db046a6a8d0ed1d2c] <==
	I1008 17:34:32.964672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 17:34:32.984895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 17:34:32.985036       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 17:34:33.005181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 17:34:33.005977       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-738106_23dc17f6-b769-4717-9723-cd1dbd61450c!
	I1008 17:34:33.006030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"612e888d-f28a-40d1-a9dd-6b1dfcd905af", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-738106_23dc17f6-b769-4717-9723-cd1dbd61450c became leader
	I1008 17:34:33.107035       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-738106_23dc17f6-b769-4717-9723-cd1dbd61450c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-738106 -n addons-738106
helpers_test.go:261: (dbg) Run:  kubectl --context addons-738106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (359.94s)

TestAddons/StoppedEnableDisable (154.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-738106
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-738106: exit status 82 (2m0.450427894s)

-- stdout --
	* Stopping node "addons-738106"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-738106" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-738106
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-738106: exit status 11 (21.617612282s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-738106" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-738106
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-738106: exit status 11 (6.14365921s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-738106" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-738106
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-738106: exit status 11 (6.142762145s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-738106" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort


=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 image ls --format short --alsologtostderr: (2.290284831s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-922806 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-922806 image ls --format short --alsologtostderr:
I1008 17:57:04.543640  548003 out.go:345] Setting OutFile to fd 1 ...
I1008 17:57:04.543805  548003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:04.543821  548003 out.go:358] Setting ErrFile to fd 2...
I1008 17:57:04.543827  548003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:04.544121  548003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
I1008 17:57:04.544964  548003 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:04.545130  548003 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:04.545726  548003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:04.545791  548003 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:04.561270  548003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44135
I1008 17:57:04.561790  548003 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:04.562444  548003 main.go:141] libmachine: Using API Version  1
I1008 17:57:04.562474  548003 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:04.562932  548003 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:04.563148  548003 main.go:141] libmachine: (functional-922806) Calling .GetState
I1008 17:57:04.565333  548003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:04.565397  548003 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:04.580408  548003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44239
I1008 17:57:04.580891  548003 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:04.581400  548003 main.go:141] libmachine: Using API Version  1
I1008 17:57:04.581434  548003 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:04.581768  548003 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:04.581941  548003 main.go:141] libmachine: (functional-922806) Calling .DriverName
I1008 17:57:04.582134  548003 ssh_runner.go:195] Run: systemctl --version
I1008 17:57:04.582163  548003 main.go:141] libmachine: (functional-922806) Calling .GetSSHHostname
I1008 17:57:04.584636  548003 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:04.585013  548003 main.go:141] libmachine: (functional-922806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:4c:59", ip: ""} in network mk-functional-922806: {Iface:virbr1 ExpiryTime:2024-10-08 18:54:00 +0000 UTC Type:0 Mac:52:54:00:f7:4c:59 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-922806 Clientid:01:52:54:00:f7:4c:59}
I1008 17:57:04.585043  548003 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined IP address 192.168.39.244 and MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:04.585168  548003 main.go:141] libmachine: (functional-922806) Calling .GetSSHPort
I1008 17:57:04.585343  548003 main.go:141] libmachine: (functional-922806) Calling .GetSSHKeyPath
I1008 17:57:04.585505  548003 main.go:141] libmachine: (functional-922806) Calling .GetSSHUsername
I1008 17:57:04.585635  548003 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/functional-922806/id_rsa Username:docker}
I1008 17:57:04.696108  548003 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 17:57:06.774138  548003 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.077984329s)
W1008 17:57:06.774248  548003 cache_images.go:734] Failed to list images for profile functional-922806 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1008 17:57:06.753612    7792 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-10-08T17:57:06Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1008 17:57:06.774309  548003 main.go:141] libmachine: Making call to close driver server
I1008 17:57:06.774341  548003 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:06.774647  548003 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:06.774662  548003 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 17:57:06.774672  548003 main.go:141] libmachine: Making call to close driver server
I1008 17:57:06.774715  548003 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:06.774747  548003 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:06.775014  548003 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:06.775049  548003 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.29s)
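
The empty image list traces back to the "sudo crictl images --output json" call above timing out with DeadlineExceeded, so there was nothing for the assertion to find. A rough sketch of the kind of check the assertion performs, assuming crictl's JSON output is an "images" array with "repoTags" fields (the struct below is an illustration, not code from functional_test.go):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// crictlImages is an assumed shape for "crictl images --output json".
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		var listing crictlImages
		if err := json.NewDecoder(os.Stdin).Decode(&listing); err != nil {
			// An empty or truncated listing, as in this run, fails here.
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range listing.Images {
			for _, tag := range img.RepoTags {
				if strings.HasPrefix(tag, "registry.k8s.io/pause") {
					fmt.Println("found", tag)
					return
				}
			}
		}
		fmt.Println("registry.k8s.io/pause not listed")
	}
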

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 node stop m02 -v=7 --alsologtostderr
E1008 18:01:49.148449  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:59.390119  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:02:19.871588  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:03:00.833084  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-094095 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.471615061s)

                                                
                                                
-- stdout --
	* Stopping node "ha-094095-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:01:48.652126  552914 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:01:48.652248  552914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:48.652260  552914 out.go:358] Setting ErrFile to fd 2...
	I1008 18:01:48.652265  552914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:01:48.652455  552914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:01:48.652719  552914 mustload.go:65] Loading cluster: ha-094095
	I1008 18:01:48.653079  552914 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:01:48.653094  552914 stop.go:39] StopHost: ha-094095-m02
	I1008 18:01:48.653449  552914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:01:48.653508  552914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:01:48.669027  552914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1008 18:01:48.669470  552914 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:01:48.670045  552914 main.go:141] libmachine: Using API Version  1
	I1008 18:01:48.670067  552914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:01:48.670504  552914 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:01:48.672842  552914 out.go:177] * Stopping node "ha-094095-m02"  ...
	I1008 18:01:48.674013  552914 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 18:01:48.674037  552914 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 18:01:48.674257  552914 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 18:01:48.674280  552914 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 18:01:48.677091  552914 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:01:48.677442  552914 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 18:01:48.677474  552914 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:01:48.677564  552914 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 18:01:48.677721  552914 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 18:01:48.677844  552914 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 18:01:48.677979  552914 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 18:01:48.766814  552914 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 18:01:48.819922  552914 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 18:01:48.873865  552914 main.go:141] libmachine: Stopping "ha-094095-m02"...
	I1008 18:01:48.873896  552914 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 18:01:48.875392  552914 main.go:141] libmachine: (ha-094095-m02) Calling .Stop
	I1008 18:01:48.878726  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 0/120
	I1008 18:01:49.880504  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 1/120
	I1008 18:01:50.881916  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 2/120
	I1008 18:01:51.883090  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 3/120
	I1008 18:01:52.884823  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 4/120
	I1008 18:01:53.886794  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 5/120
	I1008 18:01:54.888627  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 6/120
	I1008 18:01:55.889888  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 7/120
	I1008 18:01:56.891660  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 8/120
	I1008 18:01:57.893323  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 9/120
	I1008 18:01:58.895394  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 10/120
	I1008 18:01:59.896847  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 11/120
	I1008 18:02:00.898200  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 12/120
	I1008 18:02:01.899524  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 13/120
	I1008 18:02:02.901015  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 14/120
	I1008 18:02:03.903413  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 15/120
	I1008 18:02:04.905501  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 16/120
	I1008 18:02:05.906837  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 17/120
	I1008 18:02:06.908909  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 18/120
	I1008 18:02:07.910164  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 19/120
	I1008 18:02:08.912071  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 20/120
	I1008 18:02:09.913450  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 21/120
	I1008 18:02:10.914809  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 22/120
	I1008 18:02:11.916711  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 23/120
	I1008 18:02:12.918056  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 24/120
	I1008 18:02:13.919749  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 25/120
	I1008 18:02:14.920962  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 26/120
	I1008 18:02:15.922862  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 27/120
	I1008 18:02:16.923981  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 28/120
	I1008 18:02:17.925387  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 29/120
	I1008 18:02:18.927372  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 30/120
	I1008 18:02:19.928828  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 31/120
	I1008 18:02:20.929930  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 32/120
	I1008 18:02:21.931293  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 33/120
	I1008 18:02:22.932533  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 34/120
	I1008 18:02:23.934297  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 35/120
	I1008 18:02:24.935580  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 36/120
	I1008 18:02:25.936749  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 37/120
	I1008 18:02:26.937999  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 38/120
	I1008 18:02:27.939284  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 39/120
	I1008 18:02:28.941513  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 40/120
	I1008 18:02:29.942785  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 41/120
	I1008 18:02:30.944682  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 42/120
	I1008 18:02:31.946907  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 43/120
	I1008 18:02:32.948902  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 44/120
	I1008 18:02:33.950603  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 45/120
	I1008 18:02:34.952586  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 46/120
	I1008 18:02:35.954558  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 47/120
	I1008 18:02:36.956862  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 48/120
	I1008 18:02:37.957962  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 49/120
	I1008 18:02:38.959954  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 50/120
	I1008 18:02:39.961326  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 51/120
	I1008 18:02:40.962648  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 52/120
	I1008 18:02:41.964789  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 53/120
	I1008 18:02:42.966256  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 54/120
	I1008 18:02:43.968228  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 55/120
	I1008 18:02:44.969694  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 56/120
	I1008 18:02:45.970971  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 57/120
	I1008 18:02:46.972389  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 58/120
	I1008 18:02:47.974413  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 59/120
	I1008 18:02:48.976342  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 60/120
	I1008 18:02:49.977969  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 61/120
	I1008 18:02:50.979379  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 62/120
	I1008 18:02:51.980948  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 63/120
	I1008 18:02:52.982267  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 64/120
	I1008 18:02:53.984427  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 65/120
	I1008 18:02:54.986390  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 66/120
	I1008 18:02:55.987654  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 67/120
	I1008 18:02:56.990072  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 68/120
	I1008 18:02:57.991380  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 69/120
	I1008 18:02:58.993250  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 70/120
	I1008 18:02:59.994968  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 71/120
	I1008 18:03:00.997102  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 72/120
	I1008 18:03:01.998394  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 73/120
	I1008 18:03:02.999736  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 74/120
	I1008 18:03:04.001505  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 75/120
	I1008 18:03:05.003023  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 76/120
	I1008 18:03:06.004825  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 77/120
	I1008 18:03:07.006863  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 78/120
	I1008 18:03:08.008998  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 79/120
	I1008 18:03:09.010834  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 80/120
	I1008 18:03:10.012161  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 81/120
	I1008 18:03:11.013350  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 82/120
	I1008 18:03:12.015541  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 83/120
	I1008 18:03:13.016762  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 84/120
	I1008 18:03:14.018826  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 85/120
	I1008 18:03:15.020101  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 86/120
	I1008 18:03:16.021771  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 87/120
	I1008 18:03:17.023124  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 88/120
	I1008 18:03:18.024585  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 89/120
	I1008 18:03:19.026928  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 90/120
	I1008 18:03:20.028319  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 91/120
	I1008 18:03:21.029748  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 92/120
	I1008 18:03:22.031264  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 93/120
	I1008 18:03:23.032834  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 94/120
	I1008 18:03:24.034752  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 95/120
	I1008 18:03:25.036753  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 96/120
	I1008 18:03:26.038110  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 97/120
	I1008 18:03:27.039455  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 98/120
	I1008 18:03:28.040964  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 99/120
	I1008 18:03:29.042950  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 100/120
	I1008 18:03:30.044852  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 101/120
	I1008 18:03:31.046163  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 102/120
	I1008 18:03:32.047469  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 103/120
	I1008 18:03:33.049361  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 104/120
	I1008 18:03:34.051440  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 105/120
	I1008 18:03:35.052848  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 106/120
	I1008 18:03:36.054003  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 107/120
	I1008 18:03:37.055390  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 108/120
	I1008 18:03:38.057243  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 109/120
	I1008 18:03:39.059322  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 110/120
	I1008 18:03:40.060815  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 111/120
	I1008 18:03:41.062061  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 112/120
	I1008 18:03:42.063523  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 113/120
	I1008 18:03:43.064670  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 114/120
	I1008 18:03:44.066194  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 115/120
	I1008 18:03:45.067492  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 116/120
	I1008 18:03:46.068667  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 117/120
	I1008 18:03:47.070002  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 118/120
	I1008 18:03:48.071341  552914 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 119/120
	I1008 18:03:49.071984  552914 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1008 18:03:49.072156  552914 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-094095 node stop m02 -v=7 --alsologtostderr": exit status 30
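
The two-minute wait visible in the stderr above ("Waiting for machine to stop 0/120" through "119/120") is a simple poll-and-give-up loop. A minimal sketch of that behaviour, with getState as a stand-in for the libmachine driver call (not its real signature):

	package main

	import (
		"fmt"
		"time"
	)

	// waitForStop polls roughly once per second, 120 times, and gives up if
	// the VM never leaves the Running state.
	func waitForStop(getState func() string) error {
		state := "Running"
		for i := 0; i < 120; i++ {
			state = getState()
			if state != "Running" {
				return nil // the guest acknowledged the stop in time
			}
			time.Sleep(time.Second)
		}
		// The run above ended here: two minutes elapsed, state still "Running".
		return fmt.Errorf("unable to stop vm, current state %q", state)
	}

	func main() {
		// A state source that never changes reproduces this failure mode
		// (and takes the full two minutes to do so).
		fmt.Println(waitForStop(func() string { return "Running" }))
	}
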
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr: (18.722615977s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
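
The four status assertions boil down to counting per-node component states in the plain-text status output. A rough sketch of that kind of counting, assuming the usual "host: Running" / "kubelet: Stopped" line format; the parsing and sample data here are illustrative, not ha_test.go's own logic:

	package main

	import (
		"fmt"
		"strings"
	)

	// countState counts lines of the form "<component>: <state>" in a
	// minikube status dump.
	func countState(statusOutput, component, state string) int {
		n := 0
		for _, line := range strings.Split(statusOutput, "\n") {
			if strings.TrimSpace(line) == component+": "+state {
				n++
			}
		}
		return n
	}

	func main() {
		sample := "host: Running\nkubelet: Running\nhost: Stopped\nkubelet: Stopped\nhost: Running\nkubelet: Running"
		fmt.Println("hosts running:", countState(sample, "host", "Running"))
		fmt.Println("kubelets running:", countState(sample, "kubelet", "Running"))
	}
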
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (1.351071241s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m03_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:57:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:57:18.946903  548894 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:57:18.947145  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947153  548894 out.go:358] Setting ErrFile to fd 2...
	I1008 17:57:18.947157  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947344  548894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:57:18.947912  548894 out.go:352] Setting JSON to false
	I1008 17:57:18.948876  548894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5991,"bootTime":1728404248,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:57:18.948933  548894 start.go:139] virtualization: kvm guest
	I1008 17:57:18.950969  548894 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:57:18.952033  548894 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:57:18.952082  548894 notify.go:220] Checking for updates...
	I1008 17:57:18.954369  548894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:57:18.955681  548894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:57:18.956842  548894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:18.957830  548894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:57:18.959069  548894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:57:18.960234  548894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:57:18.994761  548894 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 17:57:18.995800  548894 start.go:297] selected driver: kvm2
	I1008 17:57:18.995813  548894 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:57:18.995824  548894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:57:18.996586  548894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:18.996660  548894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:57:19.011273  548894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:57:19.011313  548894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:57:19.011548  548894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:57:19.011585  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:19.011625  548894 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 17:57:19.011636  548894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 17:57:19.011687  548894 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:19.011804  548894 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:19.013449  548894 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 17:57:19.014789  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:19.014817  548894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 17:57:19.014826  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:57:19.014907  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:57:19.014919  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:57:19.015263  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:19.015288  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json: {Name:mk4a4bbfc5e4991434a64e3c2f362f3acde8e751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:19.015419  548894 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:57:19.015446  548894 start.go:364] duration metric: took 15.142µs to acquireMachinesLock for "ha-094095"
	I1008 17:57:19.015463  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:57:19.015507  548894 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 17:57:19.017014  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:57:19.017133  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:57:19.017171  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:57:19.031391  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I1008 17:57:19.031835  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:57:19.032448  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:57:19.032468  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:57:19.032843  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:57:19.033048  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:19.033189  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:19.033336  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:57:19.033367  548894 client.go:168] LocalClient.Create starting
	I1008 17:57:19.033396  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:57:19.033427  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033446  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033499  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:57:19.033517  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033530  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033545  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:57:19.033558  548894 main.go:141] libmachine: (ha-094095) Calling .PreCreateCheck
	I1008 17:57:19.033903  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:19.034253  548894 main.go:141] libmachine: Creating machine...
	I1008 17:57:19.034267  548894 main.go:141] libmachine: (ha-094095) Calling .Create
	I1008 17:57:19.034420  548894 main.go:141] libmachine: (ha-094095) Creating KVM machine...
	I1008 17:57:19.035565  548894 main.go:141] libmachine: (ha-094095) DBG | found existing default KVM network
	I1008 17:57:19.036249  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.036120  548918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1008 17:57:19.036283  548894 main.go:141] libmachine: (ha-094095) DBG | created network xml: 
	I1008 17:57:19.036302  548894 main.go:141] libmachine: (ha-094095) DBG | <network>
	I1008 17:57:19.036314  548894 main.go:141] libmachine: (ha-094095) DBG |   <name>mk-ha-094095</name>
	I1008 17:57:19.036323  548894 main.go:141] libmachine: (ha-094095) DBG |   <dns enable='no'/>
	I1008 17:57:19.036331  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036342  548894 main.go:141] libmachine: (ha-094095) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 17:57:19.036349  548894 main.go:141] libmachine: (ha-094095) DBG |     <dhcp>
	I1008 17:57:19.036361  548894 main.go:141] libmachine: (ha-094095) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 17:57:19.036370  548894 main.go:141] libmachine: (ha-094095) DBG |     </dhcp>
	I1008 17:57:19.036386  548894 main.go:141] libmachine: (ha-094095) DBG |   </ip>
	I1008 17:57:19.036427  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036447  548894 main.go:141] libmachine: (ha-094095) DBG | </network>
	I1008 17:57:19.036455  548894 main.go:141] libmachine: (ha-094095) DBG | 
	I1008 17:57:19.041263  548894 main.go:141] libmachine: (ha-094095) DBG | trying to create private KVM network mk-ha-094095 192.168.39.0/24...
	I1008 17:57:19.105180  548894 main.go:141] libmachine: (ha-094095) DBG | private KVM network mk-ha-094095 192.168.39.0/24 created
	I1008 17:57:19.105208  548894 main.go:141] libmachine: (ha-094095) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.105220  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.105167  548918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.105237  548894 main.go:141] libmachine: (ha-094095) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:57:19.105263  548894 main.go:141] libmachine: (ha-094095) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:57:19.385345  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.385226  548918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa...
	I1008 17:57:19.617977  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617838  548918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk...
	I1008 17:57:19.618008  548894 main.go:141] libmachine: (ha-094095) DBG | Writing magic tar header
	I1008 17:57:19.618021  548894 main.go:141] libmachine: (ha-094095) DBG | Writing SSH key tar header
	I1008 17:57:19.618031  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617973  548918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.618141  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095
	I1008 17:57:19.618165  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 (perms=drwx------)
	I1008 17:57:19.618171  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:57:19.618178  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:57:19.618187  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:57:19.618193  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:57:19.618199  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.618206  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:57:19.618211  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:57:19.618216  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:57:19.618224  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:57:19.618231  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home
	I1008 17:57:19.618238  548894 main.go:141] libmachine: (ha-094095) DBG | Skipping /home - not owner
	I1008 17:57:19.618249  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:57:19.618261  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:19.619347  548894 main.go:141] libmachine: (ha-094095) define libvirt domain using xml: 
	I1008 17:57:19.619369  548894 main.go:141] libmachine: (ha-094095) <domain type='kvm'>
	I1008 17:57:19.619378  548894 main.go:141] libmachine: (ha-094095)   <name>ha-094095</name>
	I1008 17:57:19.619388  548894 main.go:141] libmachine: (ha-094095)   <memory unit='MiB'>2200</memory>
	I1008 17:57:19.619396  548894 main.go:141] libmachine: (ha-094095)   <vcpu>2</vcpu>
	I1008 17:57:19.619402  548894 main.go:141] libmachine: (ha-094095)   <features>
	I1008 17:57:19.619410  548894 main.go:141] libmachine: (ha-094095)     <acpi/>
	I1008 17:57:19.619420  548894 main.go:141] libmachine: (ha-094095)     <apic/>
	I1008 17:57:19.619427  548894 main.go:141] libmachine: (ha-094095)     <pae/>
	I1008 17:57:19.619444  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619470  548894 main.go:141] libmachine: (ha-094095)   </features>
	I1008 17:57:19.619484  548894 main.go:141] libmachine: (ha-094095)   <cpu mode='host-passthrough'>
	I1008 17:57:19.619491  548894 main.go:141] libmachine: (ha-094095)   
	I1008 17:57:19.619500  548894 main.go:141] libmachine: (ha-094095)   </cpu>
	I1008 17:57:19.619506  548894 main.go:141] libmachine: (ha-094095)   <os>
	I1008 17:57:19.619515  548894 main.go:141] libmachine: (ha-094095)     <type>hvm</type>
	I1008 17:57:19.619527  548894 main.go:141] libmachine: (ha-094095)     <boot dev='cdrom'/>
	I1008 17:57:19.619536  548894 main.go:141] libmachine: (ha-094095)     <boot dev='hd'/>
	I1008 17:57:19.619547  548894 main.go:141] libmachine: (ha-094095)     <bootmenu enable='no'/>
	I1008 17:57:19.619559  548894 main.go:141] libmachine: (ha-094095)   </os>
	I1008 17:57:19.619569  548894 main.go:141] libmachine: (ha-094095)   <devices>
	I1008 17:57:19.619578  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='cdrom'>
	I1008 17:57:19.619590  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/boot2docker.iso'/>
	I1008 17:57:19.619601  548894 main.go:141] libmachine: (ha-094095)       <target dev='hdc' bus='scsi'/>
	I1008 17:57:19.619612  548894 main.go:141] libmachine: (ha-094095)       <readonly/>
	I1008 17:57:19.619621  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619648  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='disk'>
	I1008 17:57:19.619669  548894 main.go:141] libmachine: (ha-094095)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:57:19.619678  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk'/>
	I1008 17:57:19.619688  548894 main.go:141] libmachine: (ha-094095)       <target dev='hda' bus='virtio'/>
	I1008 17:57:19.619694  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619711  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619719  548894 main.go:141] libmachine: (ha-094095)       <source network='mk-ha-094095'/>
	I1008 17:57:19.619724  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619731  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619735  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619743  548894 main.go:141] libmachine: (ha-094095)       <source network='default'/>
	I1008 17:57:19.619747  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619752  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619756  548894 main.go:141] libmachine: (ha-094095)     <serial type='pty'>
	I1008 17:57:19.619763  548894 main.go:141] libmachine: (ha-094095)       <target port='0'/>
	I1008 17:57:19.619769  548894 main.go:141] libmachine: (ha-094095)     </serial>
	I1008 17:57:19.619798  548894 main.go:141] libmachine: (ha-094095)     <console type='pty'>
	I1008 17:57:19.619831  548894 main.go:141] libmachine: (ha-094095)       <target type='serial' port='0'/>
	I1008 17:57:19.619844  548894 main.go:141] libmachine: (ha-094095)     </console>
	I1008 17:57:19.619859  548894 main.go:141] libmachine: (ha-094095)     <rng model='virtio'>
	I1008 17:57:19.619885  548894 main.go:141] libmachine: (ha-094095)       <backend model='random'>/dev/random</backend>
	I1008 17:57:19.619895  548894 main.go:141] libmachine: (ha-094095)     </rng>
	I1008 17:57:19.619903  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619912  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619921  548894 main.go:141] libmachine: (ha-094095)   </devices>
	I1008 17:57:19.619930  548894 main.go:141] libmachine: (ha-094095) </domain>
	I1008 17:57:19.619943  548894 main.go:141] libmachine: (ha-094095) 
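
The XML above is handed to libvirt to define the guest domain; the driver then makes sure both referenced networks are active and boots the machine ("Creating domain..."). Below is a minimal sketch of that define-and-start flow with the libvirt-go bindings, assuming the qemu:///system URI and the two network names from the log; it is an illustration, not minikube's kvm2 driver code.

package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

// defineAndStart mirrors the "define libvirt domain using xml" and
// "Creating domain..." steps in the log above.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the profile
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	// Ensure the networks referenced by the <interface> elements are active.
	for _, name := range []string{"default", "mk-ha-094095"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			return fmt.Errorf("lookup network %s: %w", name, err)
		}
		active, err := net.IsActive()
		if err == nil && !active {
			err = net.Create()
		}
		net.Free()
		if err != nil {
			return fmt.Errorf("activate network %s: %w", name, err)
		}
	}

	// Boot the guest; the driver then polls DHCP leases for an IP.
	return dom.Create()
}

func main() {
	xml := "<domain type='kvm'>...</domain>" // the full XML logged above goes here
	if err := defineAndStart(xml); err != nil {
		log.Fatal(err)
	}
}

Once dom.Create() returns, the guest boots from the ISO attached as a cdrom device and the driver moves on to waiting for a DHCP lease, as the next log lines show.
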
	I1008 17:57:19.623957  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:c2:1c:c1 in network default
	I1008 17:57:19.624533  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:19.624567  548894 main.go:141] libmachine: (ha-094095) Ensuring networks are active...
	I1008 17:57:19.625167  548894 main.go:141] libmachine: (ha-094095) Ensuring network default is active
	I1008 17:57:19.625513  548894 main.go:141] libmachine: (ha-094095) Ensuring network mk-ha-094095 is active
	I1008 17:57:19.626008  548894 main.go:141] libmachine: (ha-094095) Getting domain xml...
	I1008 17:57:19.626619  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:20.795900  548894 main.go:141] libmachine: (ha-094095) Waiting to get IP...
	I1008 17:57:20.796661  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:20.797068  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:20.797096  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:20.797046  548918 retry.go:31] will retry after 205.911312ms: waiting for machine to come up
	I1008 17:57:21.004526  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.004999  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.005029  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.004943  548918 retry.go:31] will retry after 273.425618ms: waiting for machine to come up
	I1008 17:57:21.280506  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.280861  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.280894  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.280804  548918 retry.go:31] will retry after 435.479274ms: waiting for machine to come up
	I1008 17:57:21.717289  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.717636  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.717662  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.717595  548918 retry.go:31] will retry after 576.307625ms: waiting for machine to come up
	I1008 17:57:22.295076  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.295499  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.295527  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.295461  548918 retry.go:31] will retry after 636.373654ms: waiting for machine to come up
	I1008 17:57:22.933047  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.933364  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.933391  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.933317  548918 retry.go:31] will retry after 741.414571ms: waiting for machine to come up
	I1008 17:57:23.676038  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:23.676368  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:23.676441  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:23.676362  548918 retry.go:31] will retry after 726.748749ms: waiting for machine to come up
	I1008 17:57:24.404401  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:24.404771  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:24.404801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:24.404726  548918 retry.go:31] will retry after 1.449573768s: waiting for machine to come up
	I1008 17:57:25.856490  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:25.856930  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:25.856961  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:25.856877  548918 retry.go:31] will retry after 1.340937339s: waiting for machine to come up
	I1008 17:57:27.199433  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:27.199826  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:27.199863  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:27.199804  548918 retry.go:31] will retry after 1.798441674s: waiting for machine to come up
	I1008 17:57:28.999424  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:28.999921  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:28.999945  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:28.999873  548918 retry.go:31] will retry after 1.937304185s: waiting for machine to come up
	I1008 17:57:30.939309  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:30.939791  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:30.939819  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:30.939738  548918 retry.go:31] will retry after 3.500432638s: waiting for machine to come up
	I1008 17:57:34.441923  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:34.442356  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:34.442385  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:34.442290  548918 retry.go:31] will retry after 3.09089187s: waiting for machine to come up
	I1008 17:57:37.536439  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:37.536781  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:37.536801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:37.536736  548918 retry.go:31] will retry after 5.395822577s: waiting for machine to come up
	I1008 17:57:42.937057  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937477  548894 main.go:141] libmachine: (ha-094095) Found IP for machine: 192.168.39.99
	I1008 17:57:42.937503  548894 main.go:141] libmachine: (ha-094095) Reserving static IP address...
	I1008 17:57:42.937532  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has current primary IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937886  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find host DHCP lease matching {name: "ha-094095", mac: "52:54:00:bf:fa:3a", ip: "192.168.39.99"} in network mk-ha-094095
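
Each "will retry after ...ms" line above comes from a backoff loop that re-queries the DHCP leases for the guest's MAC address until an address shows up. A minimal sketch of such a loop, assuming a hypothetical lookupIP helper standing in for the lease query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain".
var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the domain's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP retries lookupIP with a growing backoff until the guest has an
// address or the overall deadline expires, mirroring the retry.go pattern.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff += backoff / 2 // grow roughly 1.5x per attempt
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:bf:fa:3a", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
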
	I1008 17:57:43.006083  548894 main.go:141] libmachine: (ha-094095) DBG | Getting to WaitForSSH function...
	I1008 17:57:43.006114  548894 main.go:141] libmachine: (ha-094095) Reserved static IP address: 192.168.39.99
	I1008 17:57:43.006128  548894 main.go:141] libmachine: (ha-094095) Waiting for SSH to be available...
	I1008 17:57:43.008468  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.008879  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.008907  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.009020  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH client type: external
	I1008 17:57:43.009041  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa (-rw-------)
	I1008 17:57:43.009062  548894 main.go:141] libmachine: (ha-094095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:57:43.009119  548894 main.go:141] libmachine: (ha-094095) DBG | About to run SSH command:
	I1008 17:57:43.009138  548894 main.go:141] libmachine: (ha-094095) DBG | exit 0
	I1008 17:57:43.130112  548894 main.go:141] libmachine: (ha-094095) DBG | SSH cmd err, output: <nil>: 
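
The probe above shells out to the system ssh binary with the non-interactive options shown in the log (no host key checking, key-only auth) and runs "exit 0"; an empty error means the guest is reachable. A small sketch of that probe with os/exec, using the IP and key path from the log as placeholder values:

package main

import (
	"fmt"
	"os/exec"
)

// sshAvailable runs "exit 0" on the guest with the same non-interactive
// options the driver logs for its external SSH client.
func sshAvailable(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := sshAvailable("192.168.39.99", "/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa")
	fmt.Println("ssh available:", err == nil)
}
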
	I1008 17:57:43.130367  548894 main.go:141] libmachine: (ha-094095) KVM machine creation complete!
	I1008 17:57:43.130653  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:43.131203  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131384  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131553  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:57:43.131567  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:57:43.132696  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:57:43.132710  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:57:43.132718  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:57:43.132724  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.134855  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135157  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.135186  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135341  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.135500  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135635  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135753  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.135900  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.136116  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.136132  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:57:43.237532  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.237562  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:57:43.237573  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.240102  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240361  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.240386  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240541  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.240728  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.240888  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.241033  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.241194  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.241372  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.241387  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:57:43.342754  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:57:43.342848  548894 main.go:141] libmachine: found compatible host: buildroot
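
Provisioner detection is just "cat /etc/os-release" plus a lookup of the ID field, which is how the driver concludes "found compatible host: buildroot". A small sketch of parsing that key=value output; this is an illustrative parser, not the machine library's actual one:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value lines of /etc/os-release into a map,
// stripping surrounding quotes (e.g. PRETTY_NAME="Buildroot 2023.02.9").
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	// "found compatible host: buildroot" corresponds to matching the ID field.
	fmt.Println(info["ID"], info["VERSION_ID"])
}
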
	I1008 17:57:43.342862  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:57:43.342875  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343129  548894 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 17:57:43.343169  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343355  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.345781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346150  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.346172  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346401  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.346572  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346747  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346898  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.347071  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.347247  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.347259  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 17:57:43.463654  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 17:57:43.463696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.466255  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466646  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.466682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466840  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.467010  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467143  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467243  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.467378  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.467581  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.467603  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:57:43.579438  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.579474  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:57:43.579515  548894 buildroot.go:174] setting up certificates
	I1008 17:57:43.579525  548894 provision.go:84] configureAuth start
	I1008 17:57:43.579536  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.579814  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:43.582136  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582503  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.582528  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.584820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585187  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.585207  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585310  548894 provision.go:143] copyHostCerts
	I1008 17:57:43.585352  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585401  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:57:43.585412  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585494  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:57:43.585624  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585659  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:57:43.585677  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585716  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:57:43.585797  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585818  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:57:43.585827  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585862  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:57:43.585945  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
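
The server certificate generated above carries every SAN listed in the log line (127.0.0.1, the guest IP, the hostname, localhost, minikube) and is later copied to /etc/docker on the guest. A self-contained sketch of producing such a cert with crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-094095"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject alternative names from the "generating server cert" line above.
		DNSNames:    []string{"ha-094095", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.99")},
	}

	// Self-signed for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
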
	I1008 17:57:43.673469  548894 provision.go:177] copyRemoteCerts
	I1008 17:57:43.673538  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:57:43.673570  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.676617  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.676907  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.676942  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.677124  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.677287  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.677489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.677596  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:43.759344  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:57:43.759416  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 17:57:43.781917  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:57:43.781981  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:57:43.804256  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:57:43.804312  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:57:43.826921  548894 provision.go:87] duration metric: took 247.384803ms to configureAuth
	I1008 17:57:43.826944  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:57:43.827107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:57:43.827185  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.830340  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830654  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.830685  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830917  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.831091  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831234  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831362  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.831590  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.831761  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.831775  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:57:44.043562  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:57:44.043593  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:57:44.043602  548894 main.go:141] libmachine: (ha-094095) Calling .GetURL
	I1008 17:57:44.044870  548894 main.go:141] libmachine: (ha-094095) DBG | Using libvirt version 6000000
	I1008 17:57:44.047119  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047449  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.047478  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047637  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:57:44.047652  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:57:44.047661  548894 client.go:171] duration metric: took 25.014282218s to LocalClient.Create
	I1008 17:57:44.047690  548894 start.go:167] duration metric: took 25.014354001s to libmachine.API.Create "ha-094095"
	I1008 17:57:44.047702  548894 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 17:57:44.047716  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:57:44.047739  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.048014  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:57:44.048045  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.050022  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050306  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.050347  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050505  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.050666  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.050837  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.050949  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.132504  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:57:44.136621  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:57:44.136645  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:57:44.136713  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:57:44.136806  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:57:44.136818  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:57:44.136924  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:57:44.146103  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:44.168356  548894 start.go:296] duration metric: took 120.640584ms for postStartSetup
	I1008 17:57:44.168411  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:44.169087  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.172425  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.172799  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.172823  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.173056  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:44.173256  548894 start.go:128] duration metric: took 25.157738621s to createHost
	I1008 17:57:44.173281  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.175394  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175685  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.175711  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175872  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.176022  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176162  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176257  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.176381  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:44.176571  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:44.176587  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:57:44.278668  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410264.248509692
	
	I1008 17:57:44.278691  548894 fix.go:216] guest clock: 1728410264.248509692
	I1008 17:57:44.278710  548894 fix.go:229] Guest: 2024-10-08 17:57:44.248509692 +0000 UTC Remote: 2024-10-08 17:57:44.173269639 +0000 UTC m=+25.264229848 (delta=75.240053ms)
	I1008 17:57:44.278730  548894 fix.go:200] guest clock delta is within tolerance: 75.240053ms
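
The fix.go lines above compare the guest's "date +%s.%N" output with the host timestamp recorded just before the command was issued and accept the result when the delta is small (about 75ms here). A tiny sketch of that comparison; the tolerance constant is an assumption for illustration, since the log only reports that the delta is within tolerance:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far it
// is from the host timestamp taken when the command was issued.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock: %w", err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	host := time.Unix(0, 1728410264173269639) // the "Remote" timestamp from the log
	delta, err := clockDelta("1728410264.248509692\n", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed value; the driver only logs "within tolerance"
	ok := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("guest clock delta is %v (within tolerance: %v)\n", delta, ok)
}
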
	I1008 17:57:44.278735  548894 start.go:83] releasing machines lock for "ha-094095", held for 25.26328044s
	I1008 17:57:44.278761  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.279011  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.281403  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281704  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.281728  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281844  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282331  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282492  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282608  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:57:44.282649  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.282695  548894 ssh_runner.go:195] Run: cat /version.json
	I1008 17:57:44.282718  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.285197  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285467  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285561  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285596  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285720  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.285878  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.285947  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285972  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.286009  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286152  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.286166  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.286407  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.286555  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286685  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.362923  548894 ssh_runner.go:195] Run: systemctl --version
	I1008 17:57:44.382917  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:57:44.543848  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:57:44.549734  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:57:44.549799  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:57:44.566434  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:57:44.566456  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:57:44.566531  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:57:44.582382  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:57:44.595796  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:57:44.595845  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:57:44.608932  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:57:44.621723  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:57:44.737514  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:57:44.894846  548894 docker.go:233] disabling docker service ...
	I1008 17:57:44.894913  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:57:44.908802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:57:44.920944  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:57:45.040515  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:57:45.156709  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:57:45.170339  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:57:45.188088  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:57:45.188162  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.197887  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:57:45.197965  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.207765  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.217192  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.226820  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:57:45.236401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.246021  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.261908  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.271409  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:57:45.280221  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:57:45.280279  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:57:45.293099  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:57:45.301781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:45.406440  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
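
The sysctl check above fails on a fresh guest because br_netfilter is not loaded yet, so the driver falls back to modprobe, enables IPv4 forwarding, and only then restarts CRI-O. A small sketch of that fallback sequence, run locally with os/exec for illustration rather than over SSH as the real ssh_runner does:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

// ensureNetfilter mirrors the logged sequence: if the bridge sysctl is not
// visible, load br_netfilter, then make sure IPv4 forwarding is on.
func ensureNetfilter() error {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// "couldn't verify netfilter ... which might be okay" - load the module instead.
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
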
	I1008 17:57:45.492188  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:57:45.492292  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:57:45.496696  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:57:45.496749  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:57:45.500380  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:57:45.538828  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:57:45.538916  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.566412  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.594012  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:57:45.595183  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:45.597820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598135  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:45.598169  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598406  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:57:45.602368  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:57:45.614968  548894 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 17:57:45.615076  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:45.615144  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:45.645417  548894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 17:57:45.645488  548894 ssh_runner.go:195] Run: which lz4
	I1008 17:57:45.649242  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1008 17:57:45.649331  548894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 17:57:45.653358  548894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 17:57:45.653398  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 17:57:46.900415  548894 crio.go:462] duration metric: took 1.251111162s to copy over tarball
	I1008 17:57:46.900502  548894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 17:57:48.824951  548894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.92441022s)
	I1008 17:57:48.824989  548894 crio.go:469] duration metric: took 1.924546326s to extract the tarball
	I1008 17:57:48.825000  548894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 17:57:48.862916  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:48.914586  548894 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 17:57:48.914611  548894 cache_images.go:84] Images are preloaded, skipping loading
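
The preload flow above is: stat the target path (missing on a fresh guest), scp the roughly 388 MB tarball, extract it into /var with lz4 decompression while preserving xattrs, then re-run "crictl images" to confirm the images landed. A small sketch of the extraction step and its duration metric, run locally with os/exec for illustration:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// extractPreload mirrors the logged tar invocation, preserving xattrs so the
// image layers keep their security.capability attributes.
func extractPreload(tarball string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", d)
}
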
	I1008 17:57:48.914620  548894 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 17:57:48.914713  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:57:48.914782  548894 ssh_runner.go:195] Run: crio config
	I1008 17:57:48.965231  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:48.965254  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:57:48.965272  548894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 17:57:48.965293  548894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 17:57:48.965430  548894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
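The block above is the multi-document kubeadm configuration that minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines further down: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file. As a rough illustration (not minikube's own code), a Go sketch using the gopkg.in/yaml.v3 package could split those documents and confirm the "disable disk resource management" kubelet settings, assuming a local copy of the file named kubeadm.yaml:

// Minimal sketch, assuming gopkg.in/yaml.v3 and a local copy of the
// generated config saved as kubeadm.yaml; illustration only.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		kind, _ := doc["kind"].(string)
		fmt.Println("found document:", kind)
		if kind == "KubeletConfiguration" {
			// With disk eviction disabled, imageGCHighThresholdPercent is 100
			// and every evictionHard threshold is "0%", as in the log above.
			fmt.Println("evictionHard:", doc["evictionHard"])
		}
	}
}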
	I1008 17:57:48.965457  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:57:48.965957  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:57:48.984862  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:57:48.984960  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
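The Pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), which is the staticPodPath configured in the KubeletConfiguration earlier, so the kubelet runs kube-vip as a static pod and advertises the 192.168.39.254 VIP. The following is a minimal sketch of installing such a manifest, assuming a hypothetical installStaticPod helper and root access to the manifests directory; minikube itself copies the file over SSH:

// Minimal sketch, not minikube's implementation: write the manifest to a
// temp file and rename it into the kubelet's staticPodPath so the kubelet
// never observes a half-written file.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func installStaticPod(manifest []byte, dir, name string) error {
	tmp, err := os.CreateTemp(dir, name+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup if the rename never happens

	if _, err := tmp.Write(manifest); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Rename is atomic on the same filesystem, so the kubelet picks up a
	// complete manifest or nothing at all.
	return os.Rename(tmp.Name(), filepath.Join(dir, name))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n...") // kube-vip Pod from the log, elided here
	if err := installStaticPod(manifest, "/etc/kubernetes/manifests", "kube-vip.yaml"); err != nil {
		log.Fatal(err)
	}
}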
	I1008 17:57:48.985020  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:57:48.994069  548894 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 17:57:48.994134  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 17:57:49.003013  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 17:57:49.018952  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:57:49.034270  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 17:57:49.049856  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1008 17:57:49.065212  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:57:49.068890  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:57:49.080238  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:49.207273  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:57:49.224685  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 17:57:49.224709  548894 certs.go:194] generating shared ca certs ...
	I1008 17:57:49.224731  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.224901  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:57:49.224958  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:57:49.224972  548894 certs.go:256] generating profile certs ...
	I1008 17:57:49.225044  548894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:57:49.225073  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt with IP's: []
	I1008 17:57:49.321305  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt ...
	I1008 17:57:49.321342  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt: {Name:mkc9007ec871f6b1b480e3b611a05707a64a5848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321530  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key ...
	I1008 17:57:49.321546  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key: {Name:mke9b241dc151acd2e67df3e03efa92395ed4dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321647  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc
	I1008 17:57:49.321666  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.254]
	I1008 17:57:49.615476  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc ...
	I1008 17:57:49.615508  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc: {Name:mk28ddc8f9cdc62c03babb0aa78423717078ec15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615696  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc ...
	I1008 17:57:49.615715  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc: {Name:mk7165300ee0dd42df7c546caae76a339625e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615817  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:57:49.615941  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:57:49.616029  548894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:57:49.616053  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt with IP's: []
	I1008 17:57:49.700382  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt ...
	I1008 17:57:49.700415  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt: {Name:mk23273db76b4a6b0f12257e27a1a06fa6830ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.700587  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key ...
	I1008 17:57:49.700602  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key: {Name:mk0eecaa249eaee41f1ee6112c7eb1585a4e7c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.700724  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:57:49.700753  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:57:49.700768  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:57:49.700784  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:57:49.700811  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:57:49.700836  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:57:49.700855  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:57:49.700874  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:57:49.700934  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:57:49.700987  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:57:49.701002  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:57:49.701037  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:57:49.701072  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:57:49.701103  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:57:49.701155  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:49.701193  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:49.701232  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:57:49.701259  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:57:49.701875  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:57:49.727666  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:57:49.750886  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:57:49.773442  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:57:49.797562  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 17:57:49.820463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:57:49.843011  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:57:49.866615  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:57:49.889741  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:57:49.912810  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:57:49.936333  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:57:49.960454  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 17:57:49.979469  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:57:49.985669  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:57:49.997465  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003200  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003257  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.009543  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:57:50.024695  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:57:50.038764  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044608  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044730  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.050835  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:57:50.061168  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:57:50.071347  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075705  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075749  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.081172  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
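The three command pairs above compute each certificate's OpenSSL subject hash and create /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients find a CA in the system trust store. A rough Go equivalent of that step, assuming openssl is on PATH and write access to /etc/ssl/certs; it mirrors the shell commands in the log rather than reproducing minikube's code:

// Minimal sketch: run "openssl x509 -hash -noout" to get the subject hash,
// then symlink /etc/ssl/certs/<hash>.0 to the certificate (like "ln -fs").
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, as "ln -fs" would
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/537013.pem",
		"/usr/share/ca-certificates/5370132.pem",
	} {
		if err := linkByHash(c); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", c)
	}
}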
	I1008 17:57:50.091550  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:57:50.095476  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:57:50.095534  548894 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:50.095625  548894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 17:57:50.095693  548894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 17:57:50.141057  548894 cri.go:89] found id: ""
	I1008 17:57:50.141128  548894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 17:57:50.155661  548894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 17:57:50.164965  548894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 17:57:50.174132  548894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 17:57:50.174150  548894 kubeadm.go:157] found existing configuration files:
	
	I1008 17:57:50.174193  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 17:57:50.182760  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 17:57:50.182801  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 17:57:50.191921  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 17:57:50.200321  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 17:57:50.200379  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 17:57:50.209419  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.217728  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 17:57:50.217774  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.226543  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 17:57:50.234817  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 17:57:50.234864  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 17:57:50.243553  548894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 17:57:50.351407  548894 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 17:57:50.351505  548894 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 17:57:50.448058  548894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 17:57:50.448219  548894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 17:57:50.448390  548894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 17:57:50.458228  548894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 17:57:50.561945  548894 out.go:235]   - Generating certificates and keys ...
	I1008 17:57:50.562071  548894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 17:57:50.562160  548894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 17:57:50.581396  548894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 17:57:50.643567  548894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 17:57:50.777590  548894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 17:57:50.908209  548894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 17:57:51.030015  548894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 17:57:51.030180  548894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.147196  548894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 17:57:51.147407  548894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.301954  548894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 17:57:51.401522  548894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 17:57:51.537212  548894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 17:57:51.537477  548894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 17:57:51.996984  548894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 17:57:52.232782  548894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 17:57:52.360403  548894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 17:57:52.550793  548894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 17:57:52.645896  548894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 17:57:52.646431  548894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 17:57:52.649705  548894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 17:57:52.693095  548894 out.go:235]   - Booting up control plane ...
	I1008 17:57:52.693231  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 17:57:52.693301  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 17:57:52.693399  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 17:57:52.693595  548894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 17:57:52.693726  548894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 17:57:52.693765  548894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 17:57:52.808206  548894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 17:57:52.808366  548894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 17:57:53.309429  548894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.545044ms
	I1008 17:57:53.309511  548894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 17:57:59.231916  548894 kubeadm.go:310] [api-check] The API server is healthy after 5.925563733s
	I1008 17:57:59.243298  548894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 17:57:59.259662  548894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 17:57:59.788254  548894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 17:57:59.788485  548894 kubeadm.go:310] [mark-control-plane] Marking the node ha-094095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 17:57:59.797286  548894 kubeadm.go:310] [bootstrap-token] Using token: 3mfy3k.85hms8dtl8svlvkm
	I1008 17:57:59.798387  548894 out.go:235]   - Configuring RBAC rules ...
	I1008 17:57:59.798518  548894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 17:57:59.805485  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 17:57:59.816460  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 17:57:59.820883  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 17:57:59.823643  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 17:57:59.826562  548894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 17:57:59.838159  548894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 17:58:00.096325  548894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 17:58:00.637130  548894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 17:58:00.638100  548894 kubeadm.go:310] 
	I1008 17:58:00.638187  548894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 17:58:00.638198  548894 kubeadm.go:310] 
	I1008 17:58:00.638289  548894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 17:58:00.638337  548894 kubeadm.go:310] 
	I1008 17:58:00.638388  548894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 17:58:00.638476  548894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 17:58:00.638558  548894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 17:58:00.638573  548894 kubeadm.go:310] 
	I1008 17:58:00.638644  548894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 17:58:00.638654  548894 kubeadm.go:310] 
	I1008 17:58:00.638715  548894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 17:58:00.638725  548894 kubeadm.go:310] 
	I1008 17:58:00.638784  548894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 17:58:00.638864  548894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 17:58:00.638920  548894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 17:58:00.638927  548894 kubeadm.go:310] 
	I1008 17:58:00.638996  548894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 17:58:00.639061  548894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 17:58:00.639067  548894 kubeadm.go:310] 
	I1008 17:58:00.639138  548894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639257  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 17:58:00.639298  548894 kubeadm.go:310] 	--control-plane 
	I1008 17:58:00.639308  548894 kubeadm.go:310] 
	I1008 17:58:00.639444  548894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 17:58:00.639453  548894 kubeadm.go:310] 
	I1008 17:58:00.639547  548894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639692  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 17:58:00.640765  548894 kubeadm.go:310] W1008 17:57:50.322627     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.640999  548894 kubeadm.go:310] W1008 17:57:50.323512     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.641121  548894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
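The kubelet-check and api-check phases reported above poll plain healthz endpoints (the kubelet at http://127.0.0.1:10248/healthz, then the API server) with a 4m0s ceiling. Below is a minimal, illustrative Go version of that kind of wait loop; it is not kubeadm's actual implementation:

// Minimal sketch: poll a healthz URL until it returns 200 OK or the
// context times out, roughly what the kubelet-check above describes.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

func waitHealthy(ctx context.Context, url string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("%s never became healthy: %w", url, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	// kubeadm allows up to 4m0s for this check, per the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet is healthy")
}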
	I1008 17:58:00.641159  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:58:00.641169  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:58:00.643434  548894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1008 17:58:00.644444  548894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 17:58:00.650209  548894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1008 17:58:00.650224  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 17:58:00.677687  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 17:58:01.011782  548894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 17:58:01.011872  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.011918  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095 minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=true
	I1008 17:58:01.050127  548894 ops.go:34] apiserver oom_adj: -16
	I1008 17:58:01.121355  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.622435  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.121789  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.621637  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.121512  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.621993  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.121641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.621728  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.753917  548894 kubeadm.go:1113] duration metric: took 3.742110374s to wait for elevateKubeSystemPrivileges
	I1008 17:58:04.753962  548894 kubeadm.go:394] duration metric: took 14.658436547s to StartCluster
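The repeated "kubectl get sa default" runs above are minikube waiting for the controller-manager to create the "default" service account before elevateKubeSystemPrivileges finishes (about 3.7s here). A hedged client-go sketch of the same kind of wait, assuming the kubeconfig path used elsewhere in this log; this is an illustration, not minikube's code:

// Minimal sketch: retry a Get on the "default" ServiceAccount until it
// exists or the context times out.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatalf("service account never appeared: %v", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}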
	I1008 17:58:04.753985  548894 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.754071  548894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.755006  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.755245  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 17:58:04.755258  548894 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:04.755285  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:58:04.755305  548894 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 17:58:04.755395  548894 addons.go:69] Setting storage-provisioner=true in profile "ha-094095"
	I1008 17:58:04.755421  548894 addons.go:234] Setting addon storage-provisioner=true in "ha-094095"
	I1008 17:58:04.755423  548894 addons.go:69] Setting default-storageclass=true in profile "ha-094095"
	I1008 17:58:04.755450  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.755463  548894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-094095"
	I1008 17:58:04.755954  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:04.756015  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756060  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.756153  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756178  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.771314  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I1008 17:58:04.771411  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1008 17:58:04.771715  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.771845  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.772259  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772280  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772399  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772421  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772677  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772761  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772921  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.773166  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.773207  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.775127  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.775464  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 17:58:04.776098  548894 cert_rotation.go:140] Starting client certificate rotation controller
	I1008 17:58:04.776464  548894 addons.go:234] Setting addon default-storageclass=true in "ha-094095"
	I1008 17:58:04.776513  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.776901  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.776950  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.788872  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I1008 17:58:04.789408  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.789954  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.789982  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.790391  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.790585  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.791166  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1008 17:58:04.791602  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.792075  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.792102  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.792300  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.792437  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.792883  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.792921  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.794070  548894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 17:58:04.795292  548894 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:04.795314  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 17:58:04.795333  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.798275  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798778  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.798817  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798959  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.799152  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.799319  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.799447  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.807217  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1008 17:58:04.807681  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.808084  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.808108  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.808466  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.808664  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.810084  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.810282  548894 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:04.810305  548894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 17:58:04.810351  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.813002  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813401  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.813426  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813628  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.813798  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.813951  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.814091  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.894935  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 17:58:04.989822  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:05.005242  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:05.480020  548894 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1008 17:58:05.749086  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749116  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749148  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749170  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749410  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749425  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749434  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749440  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749521  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749536  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749550  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749557  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749608  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749908  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749943  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750036  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749970  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.750103  548894 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 17:58:05.749988  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750114  548894 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 17:58:05.750160  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.750219  548894 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1008 17:58:05.750231  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.750241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.750250  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.762332  548894 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1008 17:58:05.763152  548894 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1008 17:58:05.763172  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.763185  548894 round_trippers.go:473]     Content-Type: application/json
	I1008 17:58:05.763193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.763197  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.765314  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:58:05.765554  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.765571  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.765856  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.765872  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.765886  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.768201  548894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1008 17:58:05.769166  548894 addons.go:510] duration metric: took 1.013864152s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 17:58:05.769206  548894 start.go:246] waiting for cluster config update ...
	I1008 17:58:05.769221  548894 start.go:255] writing updated cluster config ...
	I1008 17:58:05.770624  548894 out.go:201] 
	I1008 17:58:05.771889  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:05.771979  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.773435  548894 out.go:177] * Starting "ha-094095-m02" control-plane node in "ha-094095" cluster
	I1008 17:58:05.774389  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:58:05.774416  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:58:05.774517  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:58:05.774543  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:58:05.774635  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.774827  548894 start.go:360] acquireMachinesLock for ha-094095-m02: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:58:05.774885  548894 start.go:364] duration metric: took 34.657µs to acquireMachinesLock for "ha-094095-m02"
	I1008 17:58:05.774908  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:05.775005  548894 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1008 17:58:05.776351  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:58:05.776440  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:05.776482  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:05.791492  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I1008 17:58:05.791992  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:05.792464  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:05.792487  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:05.792786  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:05.792949  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:05.793054  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:05.793160  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:58:05.793192  548894 client.go:168] LocalClient.Create starting
	I1008 17:58:05.793230  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:58:05.793268  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793289  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793356  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:58:05.793382  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793399  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793425  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:58:05.793436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .PreCreateCheck
	I1008 17:58:05.793636  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:05.793961  548894 main.go:141] libmachine: Creating machine...
	I1008 17:58:05.793974  548894 main.go:141] libmachine: (ha-094095-m02) Calling .Create
	I1008 17:58:05.794087  548894 main.go:141] libmachine: (ha-094095-m02) Creating KVM machine...
	I1008 17:58:05.795174  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing default KVM network
	I1008 17:58:05.795373  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing private KVM network mk-ha-094095
	I1008 17:58:05.795488  548894 main.go:141] libmachine: (ha-094095-m02) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:05.795518  548894 main.go:141] libmachine: (ha-094095-m02) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:58:05.795590  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:05.795498  549282 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:05.795693  548894 main.go:141] libmachine: (ha-094095-m02) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:58:06.080254  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.080126  549282 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa...
	I1008 17:58:06.408665  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408546  549282 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk...
	I1008 17:58:06.408701  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing magic tar header
	I1008 17:58:06.408716  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing SSH key tar header
	I1008 17:58:06.408729  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408669  549282 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:06.408798  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02
	I1008 17:58:06.408863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:58:06.408916  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 (perms=drwx------)
	I1008 17:58:06.408935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:06.408946  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:58:06.408954  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:58:06.408966  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:58:06.408972  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home
	I1008 17:58:06.408988  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Skipping /home - not owner
	I1008 17:58:06.409003  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:58:06.409013  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:58:06.409022  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:58:06.409038  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:58:06.409050  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:58:06.409060  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:06.410262  548894 main.go:141] libmachine: (ha-094095-m02) define libvirt domain using xml: 
	I1008 17:58:06.410280  548894 main.go:141] libmachine: (ha-094095-m02) <domain type='kvm'>
	I1008 17:58:06.410300  548894 main.go:141] libmachine: (ha-094095-m02)   <name>ha-094095-m02</name>
	I1008 17:58:06.410310  548894 main.go:141] libmachine: (ha-094095-m02)   <memory unit='MiB'>2200</memory>
	I1008 17:58:06.410330  548894 main.go:141] libmachine: (ha-094095-m02)   <vcpu>2</vcpu>
	I1008 17:58:06.410344  548894 main.go:141] libmachine: (ha-094095-m02)   <features>
	I1008 17:58:06.410353  548894 main.go:141] libmachine: (ha-094095-m02)     <acpi/>
	I1008 17:58:06.410361  548894 main.go:141] libmachine: (ha-094095-m02)     <apic/>
	I1008 17:58:06.410367  548894 main.go:141] libmachine: (ha-094095-m02)     <pae/>
	I1008 17:58:06.410371  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410376  548894 main.go:141] libmachine: (ha-094095-m02)   </features>
	I1008 17:58:06.410383  548894 main.go:141] libmachine: (ha-094095-m02)   <cpu mode='host-passthrough'>
	I1008 17:58:06.410388  548894 main.go:141] libmachine: (ha-094095-m02)   
	I1008 17:58:06.410392  548894 main.go:141] libmachine: (ha-094095-m02)   </cpu>
	I1008 17:58:06.410397  548894 main.go:141] libmachine: (ha-094095-m02)   <os>
	I1008 17:58:06.410403  548894 main.go:141] libmachine: (ha-094095-m02)     <type>hvm</type>
	I1008 17:58:06.410408  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='cdrom'/>
	I1008 17:58:06.410418  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='hd'/>
	I1008 17:58:06.410430  548894 main.go:141] libmachine: (ha-094095-m02)     <bootmenu enable='no'/>
	I1008 17:58:06.410440  548894 main.go:141] libmachine: (ha-094095-m02)   </os>
	I1008 17:58:06.410448  548894 main.go:141] libmachine: (ha-094095-m02)   <devices>
	I1008 17:58:06.410456  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='cdrom'>
	I1008 17:58:06.410468  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/boot2docker.iso'/>
	I1008 17:58:06.410474  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hdc' bus='scsi'/>
	I1008 17:58:06.410479  548894 main.go:141] libmachine: (ha-094095-m02)       <readonly/>
	I1008 17:58:06.410485  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410515  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='disk'>
	I1008 17:58:06.410542  548894 main.go:141] libmachine: (ha-094095-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:58:06.410557  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk'/>
	I1008 17:58:06.410568  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hda' bus='virtio'/>
	I1008 17:58:06.410582  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410592  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410604  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='mk-ha-094095'/>
	I1008 17:58:06.410613  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410622  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410630  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410642  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='default'/>
	I1008 17:58:06.410661  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410673  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410683  548894 main.go:141] libmachine: (ha-094095-m02)     <serial type='pty'>
	I1008 17:58:06.410692  548894 main.go:141] libmachine: (ha-094095-m02)       <target port='0'/>
	I1008 17:58:06.410700  548894 main.go:141] libmachine: (ha-094095-m02)     </serial>
	I1008 17:58:06.410712  548894 main.go:141] libmachine: (ha-094095-m02)     <console type='pty'>
	I1008 17:58:06.410727  548894 main.go:141] libmachine: (ha-094095-m02)       <target type='serial' port='0'/>
	I1008 17:58:06.410741  548894 main.go:141] libmachine: (ha-094095-m02)     </console>
	I1008 17:58:06.410750  548894 main.go:141] libmachine: (ha-094095-m02)     <rng model='virtio'>
	I1008 17:58:06.410761  548894 main.go:141] libmachine: (ha-094095-m02)       <backend model='random'>/dev/random</backend>
	I1008 17:58:06.410771  548894 main.go:141] libmachine: (ha-094095-m02)     </rng>
	I1008 17:58:06.410780  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410787  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410796  548894 main.go:141] libmachine: (ha-094095-m02)   </devices>
	I1008 17:58:06.410804  548894 main.go:141] libmachine: (ha-094095-m02) </domain>
	I1008 17:58:06.410828  548894 main.go:141] libmachine: (ha-094095-m02) 
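The XML block above is the libvirt domain definition the kvm2 driver hands to libvirt for ha-094095-m02 (2 vCPUs, 2200 MiB, the boot2docker ISO as cdrom, a raw disk, and NICs on mk-ha-094095 and default). As a rough sketch of that step, not the driver's actual code, defining and booting such a domain with the Go libvirt bindings could look like the following; the package path, XML file name and error handling are assumptions:

// Sketch only: define and start a domain from an XML description using the
// libvirt Go bindings (libvirt.org/go/libvirt). Paths and XML are placeholders.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // URI from the machine config above
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("ha-094095-m02.xml") // hypothetical file holding the XML shown above
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// DomainDefineXML registers the domain; Create boots it (like `virsh define` + `virsh start`).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
}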
	I1008 17:58:06.418030  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:0f:fc:b1 in network default
	I1008 17:58:06.418595  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring networks are active...
	I1008 17:58:06.418616  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:06.419273  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network default is active
	I1008 17:58:06.419679  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network mk-ha-094095 is active
	I1008 17:58:06.420099  548894 main.go:141] libmachine: (ha-094095-m02) Getting domain xml...
	I1008 17:58:06.420774  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:07.625613  548894 main.go:141] libmachine: (ha-094095-m02) Waiting to get IP...
	I1008 17:58:07.626394  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.626834  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.626863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.626812  549282 retry.go:31] will retry after 298.191028ms: waiting for machine to come up
	I1008 17:58:07.926517  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.926935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.926967  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.926892  549282 retry.go:31] will retry after 251.007436ms: waiting for machine to come up
	I1008 17:58:08.179311  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.179723  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.179753  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.179684  549282 retry.go:31] will retry after 369.990509ms: waiting for machine to come up
	I1008 17:58:08.551209  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.551664  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.551688  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.551618  549282 retry.go:31] will retry after 529.446819ms: waiting for machine to come up
	I1008 17:58:09.082289  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.082764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.082787  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.082730  549282 retry.go:31] will retry after 698.772609ms: waiting for machine to come up
	I1008 17:58:09.782428  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.783035  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.783077  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.782975  549282 retry.go:31] will retry after 749.123701ms: waiting for machine to come up
	I1008 17:58:10.533886  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:10.534374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:10.534406  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:10.534314  549282 retry.go:31] will retry after 748.167347ms: waiting for machine to come up
	I1008 17:58:11.284374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:11.284764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:11.284793  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:11.284726  549282 retry.go:31] will retry after 1.314312212s: waiting for machine to come up
	I1008 17:58:12.600256  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:12.600675  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:12.600706  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:12.600619  549282 retry.go:31] will retry after 1.264771643s: waiting for machine to come up
	I1008 17:58:13.867255  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:13.867784  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:13.867816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:13.867728  549282 retry.go:31] will retry after 2.081210662s: waiting for machine to come up
	I1008 17:58:15.950893  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:15.951309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:15.951341  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:15.951258  549282 retry.go:31] will retry after 2.823132453s: waiting for machine to come up
	I1008 17:58:18.778198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:18.778573  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:18.778605  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:18.778535  549282 retry.go:31] will retry after 2.715237967s: waiting for machine to come up
	I1008 17:58:21.495309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:21.495754  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:21.495780  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:21.495712  549282 retry.go:31] will retry after 2.962404474s: waiting for machine to come up
	I1008 17:58:24.461815  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:24.462170  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:24.462198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:24.462131  549282 retry.go:31] will retry after 4.711440731s: waiting for machine to come up
	I1008 17:58:29.176935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177439  548894 main.go:141] libmachine: (ha-094095-m02) Found IP for machine: 192.168.39.65
	I1008 17:58:29.177459  548894 main.go:141] libmachine: (ha-094095-m02) Reserving static IP address...
	I1008 17:58:29.177467  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177881  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find host DHCP lease matching {name: "ha-094095-m02", mac: "52:54:00:28:c9:b2", ip: "192.168.39.65"} in network mk-ha-094095
	I1008 17:58:29.250979  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Getting to WaitForSSH function...
	I1008 17:58:29.251007  548894 main.go:141] libmachine: (ha-094095-m02) Reserved static IP address: 192.168.39.65
	I1008 17:58:29.251020  548894 main.go:141] libmachine: (ha-094095-m02) Waiting for SSH to be available...
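The retry.go lines above poll the libvirt DHCP leases for the new MAC with a growing, jittered delay until an address appears. A minimal sketch of that wait pattern, assuming a hypothetical lookupLeaseIP probe (this is not minikube's retry helper):

// Sketch of a retry-with-backoff wait for a DHCP lease, in the spirit of the
// "will retry after ..." lines above. lookupLeaseIP is a hypothetical stub.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no lease yet")

func lookupLeaseIP(mac string) (string, error) {
	// The real driver parses the libvirt network's DHCP leases; stubbed here.
	return "", errNoLease
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, mirroring the ~300ms ... ~4.7s steps in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	ip, err := waitForIP("52:54:00:28:c9:b2", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine is up at", ip)
}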
	I1008 17:58:29.253304  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253715  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.253745  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253826  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH client type: external
	I1008 17:58:29.253858  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa (-rw-------)
	I1008 17:58:29.253895  548894 main.go:141] libmachine: (ha-094095-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:58:29.253928  548894 main.go:141] libmachine: (ha-094095-m02) DBG | About to run SSH command:
	I1008 17:58:29.253953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | exit 0
	I1008 17:58:29.377997  548894 main.go:141] libmachine: (ha-094095-m02) DBG | SSH cmd err, output: <nil>: 
	I1008 17:58:29.378287  548894 main.go:141] libmachine: (ha-094095-m02) KVM machine creation complete!
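The `exit 0` run above is the reachability probe: the machine counts as created once a trivial SSH command succeeds with the generated key. A self-contained sketch of the same probe using golang.org/x/crypto/ssh, with the address and key path taken from the log (illustration only, not libmachine's implementation):

// Sketch: probe SSH reachability by running `exit 0`, as in the log above.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.65:22", cfg)
	if err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("probe failed: %v", err)
	}
	log.Println("SSH is available")
}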
	I1008 17:58:29.378621  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:29.379167  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379376  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379500  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:58:29.379514  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 17:58:29.380658  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:58:29.380670  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:58:29.380676  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:58:29.380683  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.382734  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383074  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.383097  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383251  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.383416  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383613  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383753  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.383914  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.384122  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.384133  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:58:29.485427  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:58:29.485449  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:58:29.485460  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.488012  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488364  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.488395  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488586  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.488786  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.488953  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.489087  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.489247  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.489514  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.489530  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:58:29.590445  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:58:29.590532  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:58:29.590542  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:58:29.590551  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.590782  548894 buildroot.go:166] provisioning hostname "ha-094095-m02"
	I1008 17:58:29.590806  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.591021  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.593666  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594067  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.594096  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594246  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.594404  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594554  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594724  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.594891  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.595109  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.595125  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m02 && echo "ha-094095-m02" | sudo tee /etc/hostname
	I1008 17:58:29.714147  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m02
	
	I1008 17:58:29.714180  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.716973  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717353  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.717384  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717565  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.717752  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.717913  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.718050  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.718222  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.718416  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.718433  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:58:29.831586  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
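The shell fragment above ensures /etc/hosts maps 127.0.1.1 to the new hostname, rewriting an existing 127.0.1.1 entry or appending one, and doing nothing if the name is already present. A rough Go equivalent of that idempotent edit (illustration only; the real step runs the shell shown above over SSH):

// Sketch: ensure /etc/hosts maps 127.0.1.1 to the given hostname, mirroring
// the grep/sed/tee fragment above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	if strings.Contains(content, hostname) {
		return nil // hostname already mapped; mirrors the grep -xq guard
	}
	lines := strings.Split(content, "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // mirrors the sed replacement branch
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	// No 127.0.1.1 line yet: append one, mirroring the `tee -a` branch.
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-094095-m02"); err != nil {
		fmt.Println(err)
	}
}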
	I1008 17:58:29.831619  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:58:29.831636  548894 buildroot.go:174] setting up certificates
	I1008 17:58:29.831645  548894 provision.go:84] configureAuth start
	I1008 17:58:29.831659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.831944  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:29.834827  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835217  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.835237  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.837816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.838223  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838374  548894 provision.go:143] copyHostCerts
	I1008 17:58:29.838406  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838440  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:58:29.838448  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838513  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:58:29.838598  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838615  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:58:29.838620  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838643  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:58:29.838682  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838698  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:58:29.838704  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838730  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:58:29.838774  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m02 san=[127.0.0.1 192.168.39.65 ha-094095-m02 localhost minikube]
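The provision.go line above issues a server certificate signed by the minikube CA with SANs 127.0.0.1, 192.168.39.65, ha-094095-m02, localhost and minikube. A rough standard-library sketch of that issuance follows; to stay self-contained it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and the key sizes and usages are assumptions:

// Sketch: issue a server certificate with the SANs listed in the log, signed
// by a CA. A throwaway CA is generated in-process for self-containment.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-094095-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// SANs from the provision.go line above.
		DNSNames:    []string{"ha-094095-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued server cert, %d bytes DER", len(srvDER))
}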
	I1008 17:58:29.938554  548894 provision.go:177] copyRemoteCerts
	I1008 17:58:29.938614  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:58:29.938646  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.941344  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941644  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.941673  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941805  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.942003  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.942163  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.942301  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.024548  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:58:30.024622  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:58:30.049270  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:58:30.049353  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:58:30.073294  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:58:30.073363  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:58:30.097034  548894 provision.go:87] duration metric: took 265.374667ms to configureAuth
	I1008 17:58:30.097066  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:58:30.097258  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:30.097336  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.100086  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100367  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.100397  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100547  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.100709  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.100901  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.101076  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.101293  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.101528  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.101554  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:58:30.316444  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:58:30.316471  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:58:30.316479  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetURL
	I1008 17:58:30.317802  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using libvirt version 6000000
	I1008 17:58:30.320137  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320544  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.320587  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320709  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:58:30.320718  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:58:30.320726  548894 client.go:171] duration metric: took 24.527519698s to LocalClient.Create
	I1008 17:58:30.320756  548894 start.go:167] duration metric: took 24.527598536s to libmachine.API.Create "ha-094095"
	I1008 17:58:30.320770  548894 start.go:293] postStartSetup for "ha-094095-m02" (driver="kvm2")
	I1008 17:58:30.320783  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:58:30.320822  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.321070  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:58:30.321097  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.323268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323601  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.323630  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323770  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.323934  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.324073  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.324173  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.408962  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:58:30.413084  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:58:30.413110  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:58:30.413178  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:58:30.413266  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:58:30.413279  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:58:30.413385  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:58:30.423213  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:30.446502  548894 start.go:296] duration metric: took 125.715217ms for postStartSetup
	I1008 17:58:30.446572  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:30.447199  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.449851  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450235  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.450268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450469  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:30.450701  548894 start.go:128] duration metric: took 24.675682473s to createHost
	I1008 17:58:30.450743  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.453038  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453348  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.453375  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.453697  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.453857  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.454010  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.454159  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.454400  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.454410  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:58:30.559077  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410310.517666608
	
	I1008 17:58:30.559107  548894 fix.go:216] guest clock: 1728410310.517666608
	I1008 17:58:30.559114  548894 fix.go:229] Guest: 2024-10-08 17:58:30.517666608 +0000 UTC Remote: 2024-10-08 17:58:30.45071757 +0000 UTC m=+71.541677784 (delta=66.949038ms)
	I1008 17:58:30.559131  548894 fix.go:200] guest clock delta is within tolerance: 66.949038ms
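The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is within tolerance. A small sketch of that comparison using the values printed in the log (the 2s tolerance is an assumption for illustration):

// Sketch: compute the guest/host clock skew from the `date +%s.%N` output,
// using the timestamps printed in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest: seconds.nanoseconds as printed by `date +%s.%N`.
	guest := time.Unix(1728410310, 517666608)
	// Host ("Remote") timestamp at the moment the command returned.
	host := time.Date(2024, 10, 8, 17, 58, 30, 450717570, time.UTC)

	delta := guest.Sub(host) // 66.949038ms, matching the log
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}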
	I1008 17:58:30.559136  548894 start.go:83] releasing machines lock for "ha-094095-m02", held for 24.78424013s
	I1008 17:58:30.559157  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.559409  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.562379  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.562717  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.562741  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.564989  548894 out.go:177] * Found network options:
	I1008 17:58:30.566270  548894 out.go:177]   - NO_PROXY=192.168.39.99
	W1008 17:58:30.567463  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.567496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568070  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568303  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568423  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:58:30.568473  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	W1008 17:58:30.568503  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.568602  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:58:30.568624  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.570953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571141  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571291  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571315  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571468  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571489  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571498  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571671  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572011  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572054  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.572151  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.807329  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:58:30.813213  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:58:30.813287  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:58:30.829683  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:58:30.829708  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:58:30.829790  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:58:30.845021  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:58:30.858172  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:58:30.858226  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:58:30.871442  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:58:30.884200  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:58:31.001594  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:58:31.145565  548894 docker.go:233] disabling docker service ...
	I1008 17:58:31.145647  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:58:31.159802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:58:31.172545  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:58:31.317614  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:58:31.428085  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:58:31.441474  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:58:31.458921  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:58:31.458992  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.469332  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:58:31.469401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.479553  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.489606  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.499476  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:58:31.509618  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.519561  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.536177  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.546145  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:58:31.555445  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:58:31.555504  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:58:31.568401  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:58:31.577660  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:31.690206  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:58:31.785577  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:58:31.785668  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:58:31.790440  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:58:31.790488  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:58:31.794008  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:58:31.830698  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:58:31.830779  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.860448  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.888491  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:58:31.889686  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:58:31.890999  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:31.893749  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894085  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:31.894111  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894298  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:58:31.898872  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:31.911229  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:58:31.911431  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:31.911784  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.911827  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.926475  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I1008 17:58:31.926940  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.927427  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.927446  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.927739  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.927928  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:31.929331  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:31.929604  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.929636  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.944569  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1008 17:58:31.945071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.945554  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.945577  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.945884  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.946077  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:31.946243  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.65
	I1008 17:58:31.946257  548894 certs.go:194] generating shared ca certs ...
	I1008 17:58:31.946274  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:31.946447  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:58:31.946488  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:58:31.946503  548894 certs.go:256] generating profile certs ...
	I1008 17:58:31.946591  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:58:31.946614  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9
	I1008 17:58:31.946631  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.254]
	I1008 17:58:32.004758  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 ...
	I1008 17:58:32.004782  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9: {Name:mk5f5c650d9dd5d2249fb843b585c028b52aecec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.004936  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 ...
	I1008 17:58:32.004948  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9: {Name:mk72de6dbb470530f019dc623057311deeb636c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.005014  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:58:32.005145  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:58:32.005267  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
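	The regenerated apiserver certificate is expected to carry the SANs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.99, 192.168.39.65 and the VIP 192.168.39.254); a quick manual check, using the profile path from this run, would be:

	    # sketch: inspect the SANs of the freshly written apiserver cert
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'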
	I1008 17:58:32.005283  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:58:32.005296  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:58:32.005308  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:58:32.005321  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:58:32.005335  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:58:32.005348  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:58:32.005359  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:58:32.005370  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:58:32.005421  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:58:32.005451  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:58:32.005460  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:58:32.005496  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:58:32.005520  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:58:32.005541  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:58:32.005579  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:32.005605  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.005619  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.005631  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.005665  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:32.008694  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009085  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:32.009115  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009227  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:32.009422  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:32.009576  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:32.009716  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:32.082578  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:58:32.087536  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:58:32.098777  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:58:32.102888  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:58:32.112522  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:58:32.116400  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:58:32.126625  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:58:32.130706  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:58:32.141238  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:58:32.145206  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:58:32.154909  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:58:32.159011  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:58:32.169341  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:58:32.193388  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:58:32.215733  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:58:32.237995  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:58:32.260545  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 17:58:32.283295  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 17:58:32.305577  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:58:32.327963  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:58:32.350081  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:58:32.372344  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:58:32.394280  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:58:32.416064  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:58:32.431348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:58:32.446729  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:58:32.462348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:58:32.479908  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:58:32.495280  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:58:32.510638  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1008 17:58:32.526014  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:58:32.531514  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:58:32.541262  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545663  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545708  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.551139  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:58:32.561010  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:58:32.570960  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575030  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575086  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.580417  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:58:32.590088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:58:32.600566  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604834  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604876  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.610374  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:58:32.620430  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:58:32.624404  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:58:32.624460  548894 kubeadm.go:934] updating node {m02 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1008 17:58:32.624566  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:58:32.624597  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:58:32.624632  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:58:32.640207  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:58:32.640276  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
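	With this static pod manifest in place, kube-vip should advertise the control-plane VIP 192.168.39.254 on eth0 of whichever control-plane node holds the plndr-cp-lock lease, and the apiserver should answer on it via port 8443. A rough manual check from inside a control-plane node (not part of the test itself) would be:

	    # sketch: see whether this node currently holds the kube-vip VIP
	    ip addr show eth0 | grep 192.168.39.254
	    # and whether the apiserver responds on the VIP (healthz is readable anonymously by default)
	    curl -k https://192.168.39.254:8443/healthz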
	I1008 17:58:32.640318  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.651418  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:58:32.651482  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.660840  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:58:32.660867  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660925  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660955  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1008 17:58:32.660974  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1008 17:58:32.665332  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:58:32.665355  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:58:33.330557  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.330641  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.335582  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:58:33.335623  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:58:33.372522  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:58:33.392996  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.393114  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.400473  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:58:33.400509  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
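	The kubectl, kubeadm and kubelet binaries above are pulled from dl.k8s.io with a companion .sha256 checksum file and then copied into /var/lib/minikube/binaries/v1.31.1. The manual equivalent of the download-and-verify step is roughly (version and component names taken from this log):

	    # sketch: fetch one component and verify it against its published checksum
	    v=v1.31.1; b=kubelet
	    curl -fsSLo "$b" "https://dl.k8s.io/release/$v/bin/linux/amd64/$b"
	    echo "$(curl -fsSL https://dl.k8s.io/release/$v/bin/linux/amd64/$b.sha256)  $b" | sha256sum --check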
	I1008 17:58:33.862223  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:58:33.873974  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 17:58:33.890552  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:58:33.907049  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:58:33.923719  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:58:33.927643  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:33.940952  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:34.068619  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:34.085108  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:34.085464  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:34.085525  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:34.100590  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1008 17:58:34.101071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:34.101641  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:34.101663  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:34.101990  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:34.102197  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:34.102362  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1008 17:58:34.102466  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:58:34.102489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:34.105069  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105405  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:34.105432  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105659  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:34.105846  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:34.106036  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:34.106174  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:34.253303  548894 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:34.253365  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443"
	I1008 17:58:55.647352  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443": (21.393954296s)
	I1008 17:58:55.647399  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 17:58:56.179900  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m02 minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 17:58:56.351414  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 17:58:56.472891  548894 start.go:319] duration metric: took 22.370522266s to joinCluster
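	Once the join, label and taint-removal commands above have completed, the second control-plane node can be verified from any working kubeconfig; a rough check (assuming the kubectl context name matches the profile name ha-094095) would be:

	    # sketch: confirm m02 registered, carries the minikube labels, and lost the control-plane NoSchedule taint
	    kubectl --context ha-094095 get node ha-094095-m02 -o wide
	    kubectl --context ha-094095 get node ha-094095-m02 \
	      -o jsonpath='{.metadata.labels}{"\n"}{.spec.taints}{"\n"}'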
	I1008 17:58:56.472999  548894 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:56.473310  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:56.474358  548894 out.go:177] * Verifying Kubernetes components...
	I1008 17:58:56.475511  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:56.748460  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:56.780862  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:56.781184  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 17:58:56.781253  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 17:58:56.781476  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m02" to be "Ready" ...
	I1008 17:58:56.781593  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:56.781601  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:56.781608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:56.781612  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:56.791092  548894 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1008 17:58:57.281764  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.281787  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.281795  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.281800  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.293233  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:58:57.782526  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.782566  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.782571  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.786781  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.281871  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.281899  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.281911  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.281917  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.285022  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:58:58.781938  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.781972  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.781983  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.781989  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.786159  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.786795  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:58:59.282562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.282596  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.282609  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.282619  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.286768  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:59.781827  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.781856  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.781867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.781872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.785211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:00.282380  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.282406  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.282417  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.282424  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.285358  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:00.782500  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.782529  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.782538  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.782541  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.785321  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.281680  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.281702  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.281711  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.281717  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.284371  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.285041  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:01.782411  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.782443  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.782453  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.782458  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.785485  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.282181  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.282203  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.282212  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.282217  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.285355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.782528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.782565  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.782571  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.785688  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.282604  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.282627  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.282638  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.282646  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.286199  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.286918  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:03.782407  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.782431  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.782441  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.782447  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.785212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:04.282369  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.282392  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.282400  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.282404  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.285540  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:04.781799  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.781818  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.781831  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.781835  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.785050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.282133  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.282156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.282163  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.282166  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.285211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.782060  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.782079  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.782090  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.782097  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.784932  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:05.785622  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:06.282491  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.282513  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.282521  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.282524  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.285446  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:06.782400  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.782424  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.782433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.782439  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.787263  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:07.282189  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.282221  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.282227  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.282231  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.285027  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:07.781864  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.781885  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.781895  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.781901  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.784237  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:08.281994  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.282014  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.282022  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.282027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.285398  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:08.286042  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:08.782428  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.782454  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.782466  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.782472  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.785709  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.282163  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.282193  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.282204  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.282211  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.285429  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.782392  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.782415  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.782423  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.782427  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.785404  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.282376  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.282398  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.282407  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.282410  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.293860  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:59:10.295059  548894 node_ready.go:49] node "ha-094095-m02" has status "Ready":"True"
	I1008 17:59:10.295090  548894 node_ready.go:38] duration metric: took 13.513574743s for node "ha-094095-m02" to be "Ready" ...
	I1008 17:59:10.295105  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
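	The repeated GETs against /api/v1/nodes/ha-094095-m02 above, and the per-pod checks that follow, are roughly what kubectl's built-in wait does; an equivalent pair of manual commands (context name assumed to match the profile) would be:

	    # sketch: the same readiness gates expressed with kubectl wait instead of raw API polling
	    kubectl --context ha-094095 wait --for=condition=Ready node/ha-094095-m02 --timeout=6m
	    kubectl --context ha-094095 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m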
	I1008 17:59:10.295211  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:10.295228  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.295239  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.295243  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.309090  548894 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1008 17:59:10.317441  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.317556  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 17:59:10.317568  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.317578  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.317586  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.321472  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.322135  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.322156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.322167  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.322174  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.328845  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.329380  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.329405  548894 pod_ready.go:82] duration metric: took 11.930599ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329419  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329498  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 17:59:10.329509  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.329520  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.329528  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.336402  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.337294  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.337313  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.337323  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.337328  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.340848  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.341320  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.341341  548894 pod_ready.go:82] duration metric: took 11.909652ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341354  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341421  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 17:59:10.341432  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.341442  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.341450  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.343586  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.344175  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.344191  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.344198  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.344202  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.346350  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.347112  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.347134  548894 pod_ready.go:82] duration metric: took 5.772495ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347147  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347220  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 17:59:10.347231  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.347241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.347249  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.349293  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.349880  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.349897  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.349916  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.349921  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.352009  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.352470  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.352496  548894 pod_ready.go:82] duration metric: took 5.340167ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.352518  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.482865  548894 request.go:632] Waited for 130.276413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482957  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482968  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.482977  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.482983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.486050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.683204  548894 request.go:632] Waited for 196.383245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683286  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683291  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.683299  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.683302  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.686545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.687112  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.687134  548894 pod_ready.go:82] duration metric: took 334.609013ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.687145  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.882406  548894 request.go:632] Waited for 195.187252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882484  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882489  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.882498  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.882503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.885610  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.082756  548894 request.go:632] Waited for 196.397183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082846  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082857  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.082869  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.082874  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.085950  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.086623  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.086650  548894 pod_ready.go:82] duration metric: took 399.497445ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.086663  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.282438  548894 request.go:632] Waited for 195.669677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282535  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282544  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.282552  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.282557  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.285746  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.482936  548894 request.go:632] Waited for 196.360528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483014  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483021  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.483030  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.483037  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.486267  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.486823  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.486845  548894 pod_ready.go:82] duration metric: took 400.172946ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.486856  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.683063  548894 request.go:632] Waited for 196.099154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683155  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683168  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.683181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.683192  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.686310  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.882490  548894 request.go:632] Waited for 195.281424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882569  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.882580  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.882587  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.885732  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.886206  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.886228  548894 pod_ready.go:82] duration metric: took 399.364956ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.886243  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.083083  548894 request.go:632] Waited for 196.741087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083174  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083181  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.083193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.083199  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.086438  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.282815  548894 request.go:632] Waited for 195.357265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282879  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282884  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.282892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.282897  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.286211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.286955  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.286978  548894 pod_ready.go:82] duration metric: took 400.728245ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.286989  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.483080  548894 request.go:632] Waited for 196.002385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483159  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483167  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.483181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.483193  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.486235  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.683233  548894 request.go:632] Waited for 196.354052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683315  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683322  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.683334  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.683341  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.686419  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.687164  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.687194  548894 pod_ready.go:82] duration metric: took 400.198282ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.687210  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.883073  548894 request.go:632] Waited for 195.753943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883139  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883145  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.883152  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.883156  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.886291  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.083210  548894 request.go:632] Waited for 196.369192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083288  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083296  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.083304  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.083308  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.086479  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.087168  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.087188  548894 pod_ready.go:82] duration metric: took 399.968628ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.087198  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.283359  548894 request.go:632] Waited for 196.068525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283420  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283425  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.283433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.283438  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.286484  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.482457  548894 request.go:632] Waited for 195.25665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482575  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482588  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.482599  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.482605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.485671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.486395  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.486417  548894 pod_ready.go:82] duration metric: took 399.212171ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.486429  548894 pod_ready.go:39] duration metric: took 3.191309926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
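The pod_ready lines above poll each control-plane pod in kube-system until its PodReady condition reports True. As a minimal sketch of that idea (not minikube's actual pod_ready helper), using client-go with a hypothetical kubeconfig path:

```go
// Sketch only: poll a named pod until its PodReady condition is True,
// the same check the pod_ready.go lines above report as has status "Ready":"True".
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "etcd-ha-094095"))
}
```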
	I1008 17:59:13.486448  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 17:59:13.486516  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 17:59:13.501134  548894 api_server.go:72] duration metric: took 17.028092431s to wait for apiserver process to appear ...
	I1008 17:59:13.501165  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 17:59:13.501208  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 17:59:13.505717  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 17:59:13.506345  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 17:59:13.506369  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.506381  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.506389  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.508475  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:13.508579  548894 api_server.go:141] control plane version: v1.31.1
	I1008 17:59:13.508596  548894 api_server.go:131] duration metric: took 7.424538ms to wait for apiserver health ...
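The api_server lines above probe https://192.168.39.99:8443/healthz (expecting the body "ok") and then read /version. A minimal sketch of such a probe, assuming a CA certificate at a placeholder path (minikube's real check uses the cluster's own certificates):

```go
// Sketch of an apiserver health probe like the one logged above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/path/to/ca.crt") // hypothetical path to the cluster CA
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.39.99:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```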
	I1008 17:59:13.508606  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 17:59:13.682454  548894 request.go:632] Waited for 173.762668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682527  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682532  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.682541  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.682546  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.687595  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 17:59:13.692646  548894 system_pods.go:59] 17 kube-system pods found
	I1008 17:59:13.692692  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:13.692702  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:13.692707  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:13.692713  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:13.692718  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:13.692723  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:13.692730  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:13.692735  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:13.692744  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:13.692750  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:13.692755  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:13.692760  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:13.692765  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:13.692774  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:13.692778  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:13.692783  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:13.692788  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:13.692796  548894 system_pods.go:74] duration metric: took 184.183414ms to wait for pod list to return data ...
	I1008 17:59:13.692811  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 17:59:13.883264  548894 request.go:632] Waited for 190.350103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883340  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883352  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.883364  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.883373  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.887200  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.887443  548894 default_sa.go:45] found service account: "default"
	I1008 17:59:13.887464  548894 default_sa.go:55] duration metric: took 194.642236ms for default service account to be created ...
	I1008 17:59:13.887473  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 17:59:14.083128  548894 request.go:632] Waited for 195.575348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083197  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083204  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.083215  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.083224  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.087502  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:14.091850  548894 system_pods.go:86] 17 kube-system pods found
	I1008 17:59:14.091874  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:14.091880  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:14.091884  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:14.091888  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:14.091895  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:14.091898  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:14.091903  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:14.091909  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:14.091915  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:14.091921  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:14.091929  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:14.091935  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:14.091943  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:14.091948  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:14.091954  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:14.091958  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:14.091961  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:14.091969  548894 system_pods.go:126] duration metric: took 204.490014ms to wait for k8s-apps to be running ...
	I1008 17:59:14.091978  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 17:59:14.092031  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:14.107751  548894 system_svc.go:56] duration metric: took 15.765669ms WaitForService to wait for kubelet
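The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH and only looks at the exit status. Run locally, the same idea is roughly (a sketch, not the ssh_runner code):

```go
// Sketch: exit status 0 from "systemctl is-active --quiet" means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```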
	I1008 17:59:14.107782  548894 kubeadm.go:582] duration metric: took 17.634744099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:59:14.107804  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 17:59:14.283342  548894 request.go:632] Waited for 175.43028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283397  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283402  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.283410  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.283415  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.286910  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:14.287827  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287854  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287877  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287883  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287892  548894 node_conditions.go:105] duration metric: took 180.082842ms to run NodePressure ...
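The NodePressure step above lists the nodes and reports each node's ephemeral-storage and CPU capacity. A small client-go sketch of reading those fields (kubeconfig path is again a placeholder):

```go
// Sketch of the node-capacity readout logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```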
	I1008 17:59:14.287908  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:59:14.287939  548894 start.go:255] writing updated cluster config ...
	I1008 17:59:14.289665  548894 out.go:201] 
	I1008 17:59:14.290934  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:14.291033  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.292598  548894 out.go:177] * Starting "ha-094095-m03" control-plane node in "ha-094095" cluster
	I1008 17:59:14.293602  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:59:14.293620  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:59:14.293722  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:59:14.293741  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:59:14.293865  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.294036  548894 start.go:360] acquireMachinesLock for ha-094095-m03: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:59:14.294084  548894 start.go:364] duration metric: took 28.442µs to acquireMachinesLock for "ha-094095-m03"
	I1008 17:59:14.294116  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:14.294207  548894 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1008 17:59:14.295495  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:59:14.295567  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:14.295608  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:14.310848  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I1008 17:59:14.311356  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:14.311872  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:14.311899  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:14.312212  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:14.312396  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:14.312674  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:14.312844  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:59:14.312876  548894 client.go:168] LocalClient.Create starting
	I1008 17:59:14.312902  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:59:14.312934  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.312948  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313000  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:59:14.313019  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.313027  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313042  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:59:14.313050  548894 main.go:141] libmachine: (ha-094095-m03) Calling .PreCreateCheck
	I1008 17:59:14.313206  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:14.313583  548894 main.go:141] libmachine: Creating machine...
	I1008 17:59:14.313600  548894 main.go:141] libmachine: (ha-094095-m03) Calling .Create
	I1008 17:59:14.313739  548894 main.go:141] libmachine: (ha-094095-m03) Creating KVM machine...
	I1008 17:59:14.314906  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing default KVM network
	I1008 17:59:14.315074  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing private KVM network mk-ha-094095
	I1008 17:59:14.315221  548894 main.go:141] libmachine: (ha-094095-m03) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.315247  548894 main.go:141] libmachine: (ha-094095-m03) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:59:14.315327  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.315217  549655 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.315388  548894 main.go:141] libmachine: (ha-094095-m03) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:59:14.593209  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.593087  549655 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa...
	I1008 17:59:14.821442  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821329  549655 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk...
	I1008 17:59:14.821476  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing magic tar header
	I1008 17:59:14.821491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing SSH key tar header
	I1008 17:59:14.821502  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821478  549655 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.821659  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03
	I1008 17:59:14.821694  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 (perms=drwx------)
	I1008 17:59:14.821705  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:59:14.821719  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.821729  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:59:14.821740  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:59:14.821750  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:59:14.821762  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:59:14.821772  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home
	I1008 17:59:14.821784  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:59:14.821794  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Skipping /home - not owner
	I1008 17:59:14.821808  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:59:14.821819  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:59:14.821836  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:59:14.821846  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:14.822739  548894 main.go:141] libmachine: (ha-094095-m03) define libvirt domain using xml: 
	I1008 17:59:14.822758  548894 main.go:141] libmachine: (ha-094095-m03) <domain type='kvm'>
	I1008 17:59:14.822767  548894 main.go:141] libmachine: (ha-094095-m03)   <name>ha-094095-m03</name>
	I1008 17:59:14.822774  548894 main.go:141] libmachine: (ha-094095-m03)   <memory unit='MiB'>2200</memory>
	I1008 17:59:14.822782  548894 main.go:141] libmachine: (ha-094095-m03)   <vcpu>2</vcpu>
	I1008 17:59:14.822792  548894 main.go:141] libmachine: (ha-094095-m03)   <features>
	I1008 17:59:14.822799  548894 main.go:141] libmachine: (ha-094095-m03)     <acpi/>
	I1008 17:59:14.822805  548894 main.go:141] libmachine: (ha-094095-m03)     <apic/>
	I1008 17:59:14.822815  548894 main.go:141] libmachine: (ha-094095-m03)     <pae/>
	I1008 17:59:14.822822  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.822827  548894 main.go:141] libmachine: (ha-094095-m03)   </features>
	I1008 17:59:14.822834  548894 main.go:141] libmachine: (ha-094095-m03)   <cpu mode='host-passthrough'>
	I1008 17:59:14.822838  548894 main.go:141] libmachine: (ha-094095-m03)   
	I1008 17:59:14.822842  548894 main.go:141] libmachine: (ha-094095-m03)   </cpu>
	I1008 17:59:14.822847  548894 main.go:141] libmachine: (ha-094095-m03)   <os>
	I1008 17:59:14.822857  548894 main.go:141] libmachine: (ha-094095-m03)     <type>hvm</type>
	I1008 17:59:14.822865  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='cdrom'/>
	I1008 17:59:14.822879  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='hd'/>
	I1008 17:59:14.822888  548894 main.go:141] libmachine: (ha-094095-m03)     <bootmenu enable='no'/>
	I1008 17:59:14.822897  548894 main.go:141] libmachine: (ha-094095-m03)   </os>
	I1008 17:59:14.822903  548894 main.go:141] libmachine: (ha-094095-m03)   <devices>
	I1008 17:59:14.822910  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='cdrom'>
	I1008 17:59:14.822919  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/boot2docker.iso'/>
	I1008 17:59:14.822926  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hdc' bus='scsi'/>
	I1008 17:59:14.822931  548894 main.go:141] libmachine: (ha-094095-m03)       <readonly/>
	I1008 17:59:14.822939  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.822951  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='disk'>
	I1008 17:59:14.822984  548894 main.go:141] libmachine: (ha-094095-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:59:14.822998  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk'/>
	I1008 17:59:14.823004  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hda' bus='virtio'/>
	I1008 17:59:14.823008  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.823012  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823018  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='mk-ha-094095'/>
	I1008 17:59:14.823028  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823037  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823050  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823062  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='default'/>
	I1008 17:59:14.823072  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823080  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823089  548894 main.go:141] libmachine: (ha-094095-m03)     <serial type='pty'>
	I1008 17:59:14.823097  548894 main.go:141] libmachine: (ha-094095-m03)       <target port='0'/>
	I1008 17:59:14.823105  548894 main.go:141] libmachine: (ha-094095-m03)     </serial>
	I1008 17:59:14.823114  548894 main.go:141] libmachine: (ha-094095-m03)     <console type='pty'>
	I1008 17:59:14.823128  548894 main.go:141] libmachine: (ha-094095-m03)       <target type='serial' port='0'/>
	I1008 17:59:14.823139  548894 main.go:141] libmachine: (ha-094095-m03)     </console>
	I1008 17:59:14.823147  548894 main.go:141] libmachine: (ha-094095-m03)     <rng model='virtio'>
	I1008 17:59:14.823159  548894 main.go:141] libmachine: (ha-094095-m03)       <backend model='random'>/dev/random</backend>
	I1008 17:59:14.823166  548894 main.go:141] libmachine: (ha-094095-m03)     </rng>
	I1008 17:59:14.823173  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823181  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823189  548894 main.go:141] libmachine: (ha-094095-m03)   </devices>
	I1008 17:59:14.823202  548894 main.go:141] libmachine: (ha-094095-m03) </domain>
	I1008 17:59:14.823214  548894 main.go:141] libmachine: (ha-094095-m03) 
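The lines above print the libvirt domain XML minikube generates for ha-094095-m03 before defining it. A minimal sketch of the equivalent libvirt calls (not minikube's kvm2 driver code), assuming the libvirt.org/go/libvirt Go bindings and a qemu:///system connection:

```go
// Sketch: define a persistent domain from an XML document like the one logged
// above, then boot it (the "Creating domain..." step).
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("domain.xml") // e.g. the <domain type='kvm'> document above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start the defined domain
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain:", name)
}
```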
	I1008 17:59:14.829896  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:d4:34:b1 in network default
	I1008 17:59:14.830619  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:14.830642  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring networks are active...
	I1008 17:59:14.831385  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network default is active
	I1008 17:59:14.831784  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network mk-ha-094095 is active
	I1008 17:59:14.832205  548894 main.go:141] libmachine: (ha-094095-m03) Getting domain xml...
	I1008 17:59:14.832929  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:16.039421  548894 main.go:141] libmachine: (ha-094095-m03) Waiting to get IP...
	I1008 17:59:16.040212  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.040604  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.040627  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.040576  549655 retry.go:31] will retry after 310.617511ms: waiting for machine to come up
	I1008 17:59:16.353098  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.353638  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.353666  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.353600  549655 retry.go:31] will retry after 370.013025ms: waiting for machine to come up
	I1008 17:59:16.725039  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.725471  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.725511  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.725419  549655 retry.go:31] will retry after 335.057817ms: waiting for machine to come up
	I1008 17:59:17.061762  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.062145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.062168  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.062095  549655 retry.go:31] will retry after 553.959397ms: waiting for machine to come up
	I1008 17:59:17.617869  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.618404  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.618431  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.618345  549655 retry.go:31] will retry after 506.335647ms: waiting for machine to come up
	I1008 17:59:18.125977  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.126353  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.126384  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.126291  549655 retry.go:31] will retry after 734.408354ms: waiting for machine to come up
	I1008 17:59:18.862107  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.862605  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.862632  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.862544  549655 retry.go:31] will retry after 1.020122482s: waiting for machine to come up
	I1008 17:59:19.884038  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:19.884492  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:19.884530  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:19.884425  549655 retry.go:31] will retry after 1.125801014s: waiting for machine to come up
	I1008 17:59:21.011532  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:21.011993  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:21.012020  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:21.011944  549655 retry.go:31] will retry after 1.660141079s: waiting for machine to come up
	I1008 17:59:22.673143  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:22.673540  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:22.673570  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:22.673522  549655 retry.go:31] will retry after 1.579793422s: waiting for machine to come up
	I1008 17:59:24.255498  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:24.256062  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:24.256089  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:24.256014  549655 retry.go:31] will retry after 2.586780396s: waiting for machine to come up
	I1008 17:59:26.845780  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:26.846232  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:26.846256  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:26.846181  549655 retry.go:31] will retry after 2.461770006s: waiting for machine to come up
	I1008 17:59:29.309639  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:29.310146  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:29.310176  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:29.310088  549655 retry.go:31] will retry after 4.519355473s: waiting for machine to come up
	I1008 17:59:33.833985  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:33.834361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:33.834386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:33.834293  549655 retry.go:31] will retry after 3.493644498s: waiting for machine to come up
	I1008 17:59:37.331421  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.331914  548894 main.go:141] libmachine: (ha-094095-m03) Found IP for machine: 192.168.39.194
	I1008 17:59:37.331939  548894 main.go:141] libmachine: (ha-094095-m03) Reserving static IP address...
	I1008 17:59:37.331956  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has current primary IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.332395  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find host DHCP lease matching {name: "ha-094095-m03", mac: "52:54:00:e6:8f:e3", ip: "192.168.39.194"} in network mk-ha-094095
	I1008 17:59:37.404136  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Getting to WaitForSSH function...
	I1008 17:59:37.404175  548894 main.go:141] libmachine: (ha-094095-m03) Reserved static IP address: 192.168.39.194
	I1008 17:59:37.404188  548894 main.go:141] libmachine: (ha-094095-m03) Waiting for SSH to be available...
	I1008 17:59:37.406755  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407114  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.407145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407257  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH client type: external
	I1008 17:59:37.407295  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa (-rw-------)
	I1008 17:59:37.407348  548894 main.go:141] libmachine: (ha-094095-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:59:37.407377  548894 main.go:141] libmachine: (ha-094095-m03) DBG | About to run SSH command:
	I1008 17:59:37.407391  548894 main.go:141] libmachine: (ha-094095-m03) DBG | exit 0
	I1008 17:59:37.534234  548894 main.go:141] libmachine: (ha-094095-m03) DBG | SSH cmd err, output: <nil>: 
	I1008 17:59:37.534542  548894 main.go:141] libmachine: (ha-094095-m03) KVM machine creation complete!
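Both the "Waiting to get IP" retries and the WaitForSSH step above follow the same probe-and-retry pattern: attempt, and on failure sleep an increasing delay before trying again. A sketch of that pattern; this version only dials TCP port 22, whereas the log shows the real check running "exit 0" through /usr/bin/ssh with the generated id_rsa key:

```go
// Sketch of a probe-with-backoff wait for an SSH port to open.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("not reachable yet, retrying in %v: %v\n", delay, err)
		time.Sleep(delay)
		if delay < 5*time.Second { // grow the delay, roughly like the retry.go lines above
			delay += delay / 2
		}
	}
	return fmt.Errorf("%s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForPort("192.168.39.194:22", 3*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("SSH port is open")
}
```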
	I1008 17:59:37.535062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:37.535615  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.535835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.536043  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:59:37.536062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetState
	I1008 17:59:37.537459  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:59:37.537477  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:59:37.537484  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:59:37.537492  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.539962  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540458  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.540491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540661  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.540847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.540985  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.541188  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.541386  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.541674  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.541690  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:59:37.649416  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:37.649443  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:59:37.649452  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.652360  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652754  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.652783  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652904  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.653099  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653253  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653372  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.653521  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.653691  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.653700  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:59:37.763719  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:59:37.763801  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:59:37.763820  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:59:37.763835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764121  548894 buildroot.go:166] provisioning hostname "ha-094095-m03"
	I1008 17:59:37.764156  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764347  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.766798  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.767194  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.767617  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767784  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767982  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.768161  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.768362  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.768381  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m03 && echo "ha-094095-m03" | sudo tee /etc/hostname
	I1008 17:59:37.892598  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m03
	
	I1008 17:59:37.892638  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.895717  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896104  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.896139  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896357  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.896582  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896764  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896930  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.897130  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.897346  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.897371  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:59:38.015892  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
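Note: the two SSH commands above first set the guest hostname to ha-094095-m03 and then make sure /etc/hosts resolves that name locally, either rewriting an existing 127.0.1.1 entry or appending one. A quick manual check, assuming shell access to the ha-094095-m03 guest (not part of this log):

    # run on the ha-094095-m03 guest
    hostname                        # expected: ha-094095-m03
    grep ha-094095-m03 /etc/hosts   # expected: a 127.0.1.1 ha-094095-m03 entry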
	I1008 17:59:38.015942  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:59:38.015964  548894 buildroot.go:174] setting up certificates
	I1008 17:59:38.015976  548894 provision.go:84] configureAuth start
	I1008 17:59:38.015994  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:38.016285  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.018925  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019329  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.019361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019480  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.021681  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022085  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.022109  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022295  548894 provision.go:143] copyHostCerts
	I1008 17:59:38.022355  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022398  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:59:38.022410  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022497  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:59:38.022612  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022639  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:59:38.022646  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022684  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:59:38.022749  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022772  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:59:38.022780  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022817  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:59:38.022905  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m03 san=[127.0.0.1 192.168.39.194 ha-094095-m03 localhost minikube]
	I1008 17:59:38.409825  548894 provision.go:177] copyRemoteCerts
	I1008 17:59:38.409880  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:59:38.409906  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.412474  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.412819  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.412850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.413057  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.413233  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.413436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.413614  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.500707  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:59:38.500793  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:59:38.526942  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:59:38.527009  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:59:38.552205  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:59:38.552273  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 17:59:38.575397  548894 provision.go:87] duration metric: took 559.401387ms to configureAuth
	I1008 17:59:38.575426  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:59:38.575799  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:38.575895  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.579241  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579746  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.579778  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579962  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.580162  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580375  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580557  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.580756  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.580976  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.581001  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:59:38.814916  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
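Note: the command above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' (the cluster's Service CIDR, per the config later in this log) and restarts CRI-O so the option takes effect. A minimal sketch for confirming it on the guest, assuming shell access to ha-094095-m03:

    # run on the guest; path taken from the command above
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio        # expected: active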
	I1008 17:59:38.814943  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:59:38.814951  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetURL
	I1008 17:59:38.816195  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using libvirt version 6000000
	I1008 17:59:38.818782  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.819181  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819313  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:59:38.819324  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:59:38.819331  548894 client.go:171] duration metric: took 24.506447945s to LocalClient.Create
	I1008 17:59:38.819354  548894 start.go:167] duration metric: took 24.506513664s to libmachine.API.Create "ha-094095"
	I1008 17:59:38.819366  548894 start.go:293] postStartSetup for "ha-094095-m03" (driver="kvm2")
	I1008 17:59:38.819379  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:59:38.819402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:38.819667  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:59:38.819695  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.822386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.822850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.822878  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.823079  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.823255  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.823425  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.823576  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.911016  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:59:38.915516  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:59:38.915544  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:59:38.915616  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:59:38.915703  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:59:38.915717  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:59:38.915843  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:59:38.927016  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:38.951613  548894 start.go:296] duration metric: took 132.232716ms for postStartSetup
	I1008 17:59:38.951663  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:38.952254  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.954773  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955177  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.955206  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955479  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:38.955726  548894 start.go:128] duration metric: took 24.661507137s to createHost
	I1008 17:59:38.955754  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.957824  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958152  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.958180  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958260  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.958436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958614  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958783  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.958982  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.959149  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.959198  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:59:39.066802  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410379.042145365
	
	I1008 17:59:39.066831  548894 fix.go:216] guest clock: 1728410379.042145365
	I1008 17:59:39.066838  548894 fix.go:229] Guest: 2024-10-08 17:59:39.042145365 +0000 UTC Remote: 2024-10-08 17:59:38.955741605 +0000 UTC m=+140.046701810 (delta=86.40376ms)
	I1008 17:59:39.066854  548894 fix.go:200] guest clock delta is within tolerance: 86.40376ms
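Note: the guest-clock check runs `date +%s.%N` on the new VM and compares it with the host-side timestamp recorded just before. Working the numbers from the two lines above:

    delta = 1728410379.042145365 (guest) - 1728410378.955741605 (host)
          = 0.086403760 s ≈ 86.40 ms

which matches the logged delta of 86.40376ms and stays inside the tolerance, so no clock adjustment is needed.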
	I1008 17:59:39.066859  548894 start.go:83] releasing machines lock for "ha-094095-m03", held for 24.772764688s
	I1008 17:59:39.066879  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.067121  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:39.069711  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.070086  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.070113  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.072386  548894 out.go:177] * Found network options:
	I1008 17:59:39.073842  548894 out.go:177]   - NO_PROXY=192.168.39.99,192.168.39.65
	W1008 17:59:39.075265  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.075288  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.075301  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.075811  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076009  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076099  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:59:39.076150  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	W1008 17:59:39.076202  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.076228  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.076306  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:59:39.076328  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:39.078554  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.078807  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079018  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079043  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079229  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079324  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079350  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079420  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.079542  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079593  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.079786  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.079847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.080000  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.080138  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.318698  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:59:39.324927  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:59:39.324990  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:59:39.343637  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:59:39.343660  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:59:39.343717  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:59:39.360309  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:59:39.373825  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:59:39.373881  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:59:39.387260  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:59:39.400202  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:59:39.520831  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:59:39.680675  548894 docker.go:233] disabling docker service ...
	I1008 17:59:39.680761  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:59:39.695394  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:59:39.710367  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:59:39.839252  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:59:39.972794  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:59:39.988321  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:59:40.006947  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:59:40.007031  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.018072  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:59:40.018137  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.029758  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.040612  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.051467  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:59:40.062960  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.074528  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.091933  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
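Note: the sed/grep edits above (pause image, cgroup manager, conmon cgroup, default sysctls) all rewrite /etc/crio/crio.conf.d/02-crio.conf. Based only on those commands, the touched settings should end up roughly as in the fragment below; this is a reconstruction for illustration (including the section headers, which the log never shows), not a dump of the real file:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed, sections assumed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]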
	I1008 17:59:40.101742  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:59:40.111189  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:59:40.111232  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:59:40.123431  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
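Note: the failed `sysctl net.bridge.bridge-nf-call-iptables` above is expected when the br_netfilter module is not loaded yet, since /proc/sys/net/bridge only appears once it is; minikube then loads the module and enables IPv4 forwarding. A small on-guest check, assuming shell access to the node:

    # run on the guest after the steps above
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # should now print a value
    cat /proc/sys/net/ipv4/ip_forward           # expected: 1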
	I1008 17:59:40.132781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:40.256434  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:59:40.349829  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:59:40.349903  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:59:40.354785  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:59:40.354842  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:59:40.358519  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:59:40.397714  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:59:40.397812  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.425086  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.452883  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:59:40.454244  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:59:40.455477  548894 out.go:177]   - env NO_PROXY=192.168.39.99,192.168.39.65
	I1008 17:59:40.456757  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:40.459422  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.459818  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:40.459840  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.460096  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:59:40.464498  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:40.479877  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:59:40.480107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:40.480402  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.480441  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.495933  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I1008 17:59:40.496453  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.496925  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.496949  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.497271  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.497471  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:59:40.499057  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:40.499430  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.499465  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.513547  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I1008 17:59:40.514005  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.514450  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.514473  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.514842  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.515015  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:40.515189  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.194
	I1008 17:59:40.515202  548894 certs.go:194] generating shared ca certs ...
	I1008 17:59:40.515221  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.515367  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:59:40.515423  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:59:40.515435  548894 certs.go:256] generating profile certs ...
	I1008 17:59:40.515545  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:59:40.515578  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d
	I1008 17:59:40.515597  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 17:59:40.734889  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d ...
	I1008 17:59:40.734923  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d: {Name:mkaac2d16400496ba6ef1c81a4206e8cf0480e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735091  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d ...
	I1008 17:59:40.735104  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d: {Name:mk3a55a29959b59f407eb97877f8ee016f652037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735177  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:59:40.735309  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
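Note: the apiserver serving certificate generated above is signed for every address a client might use: the service-network IPs (10.96.0.1, 10.0.0.1), loopback, all three control-plane node IPs, and the kube-vip VIP 192.168.39.254. One way to confirm the SAN list, using the cert path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt \
      | grep -A1 'Subject Alternative Name'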
	I1008 17:59:40.735433  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:59:40.735451  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:59:40.735464  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:59:40.735479  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:59:40.735491  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:59:40.735503  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:59:40.735514  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:59:40.735528  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:59:40.750415  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:59:40.750523  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:59:40.750564  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:59:40.750576  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:59:40.750597  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:59:40.750620  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:59:40.750642  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:59:40.750679  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:40.750709  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:40.750727  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:59:40.750739  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:59:40.750776  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:40.754187  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754657  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:40.754682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754891  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:40.755083  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:40.755214  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:40.755357  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:40.826678  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:59:40.831630  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:59:40.843594  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:59:40.848493  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:59:40.859904  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:59:40.864097  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:59:40.874362  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:59:40.878501  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:59:40.890535  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:59:40.895442  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:59:40.907886  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:59:40.911759  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:59:40.921878  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:59:40.947644  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:59:40.970914  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:59:40.993912  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:59:41.017348  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1008 17:59:41.040662  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:59:41.063411  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:59:41.086440  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:59:41.109681  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:59:41.132484  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:59:41.156226  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:59:41.178867  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:59:41.195488  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:59:41.212613  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:59:41.228807  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:59:41.246244  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:59:41.262224  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:59:41.277985  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1008 17:59:41.294525  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:59:41.300038  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:59:41.311084  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315442  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315488  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.321163  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:59:41.332088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:59:41.342926  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347780  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347833  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.353198  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:59:41.363300  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:59:41.373282  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377636  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377682  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.383451  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
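Note: the repeated pattern above (copy the PEM into /usr/share/ca-certificates, hash it with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/&lt;hash&gt;.0 to it) is the standard OpenSSL subject-hash layout that lets TLS clients on the node find a CA by hash. The same idiom by hand, using the minikubeCA hash b5213941 seen above:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"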
	I1008 17:59:41.393738  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:59:41.397604  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:59:41.397660  548894 kubeadm.go:934] updating node {m03 192.168.39.194 8443 v1.31.1 crio true true} ...
	I1008 17:59:41.397755  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
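Note: in the kubelet unit content above, the empty `ExecStart=` line is the usual systemd idiom for clearing any inherited ExecStart before redefining it with the node-specific flags (--hostname-override=ha-094095-m03, --node-ip=192.168.39.194). Once the unit files are written later in this log, the effective command line can be inspected on the guest with:

    # run on the guest; shows the unit plus its drop-ins and the resolved ExecStart
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart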
	I1008 17:59:41.397799  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:59:41.397831  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:59:41.412820  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:59:41.412901  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
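Note: this manifest is copied later in the log to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod on each control-plane node. Leader election (vip_leaderelection via the plndr-cp-lock lease) decides which node currently announces the 192.168.39.254 VIP on eth0, and lb_enable load-balances API traffic on port 8443. A quick way to see which node holds the address, assuming shell access to the control-plane guests:

    # run on each control-plane guest; only the current leader should show the VIP
    ip -4 addr show dev eth0 | grep 192.168.39.254
    sudo crictl ps --name kube-vip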
	I1008 17:59:41.412955  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.422366  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:59:41.422410  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.431355  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:59:41.431384  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431397  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1008 17:59:41.431416  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431363  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431494  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:41.446391  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.446418  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:59:41.446444  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:59:41.446446  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:59:41.446463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:59:41.447018  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.480884  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:59:41.480970  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
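Note: kubectl, kubeadm and kubelet are copied from the local cache into /var/lib/minikube/binaries/v1.31.1 because the stat checks above showed them missing; the cache itself is keyed to the published SHA-256 checksums on dl.k8s.io. A spot-check of one binary against the upstream checksum, using the URL from the log:

    # run on the guest (or against the cached copy on the host)
    sha256sum /var/lib/minikube/binaries/v1.31.1/kubelet
    curl -sSL https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    # the two hashes should match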
	I1008 17:59:42.313012  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:59:42.322438  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1008 17:59:42.338702  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:59:42.365144  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:59:42.382514  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:59:42.386113  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:42.397995  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:42.523088  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:59:42.540754  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:42.541257  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:42.541326  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:42.559172  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I1008 17:59:42.559678  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:42.560333  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:42.560360  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:42.560754  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:42.560977  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:42.561148  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:59:42.561320  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:59:42.561345  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:42.564781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565346  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:42.565377  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565645  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:42.565831  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:42.566030  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:42.566199  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
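A manual equivalent of the SSH client opened above, for reference: user "docker", the profile's per-machine key, and the node IP resolved from the DHCP lease (the key path is abbreviated here; in this run it sits under the jenkins workspace):

    # Hand-run equivalent of the sshutil client above (path abbreviated).
    ssh -i ~/.minikube/machines/ha-094095/id_rsa docker@192.168.39.99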
	I1008 17:59:42.729842  548894 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:42.729907  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443"
	I1008 18:00:04.832594  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443": (22.102635583s)
	I1008 18:00:04.832637  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
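The join executed above is the core of joinCluster: a token and CA-cert hash are printed on the primary with "kubeadm token create --print-join-command --ttl=0", then re-run on the new machine with control-plane flags and a kubelet start. A hedged sketch with placeholder credentials ($TOKEN and $CA_HASH stand in for the generated values):

    # Sketch of the control-plane join run above; $TOKEN and $CA_HASH are placeholders
    # for the values produced by "kubeadm token create --print-join-command --ttl=0".
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token "$TOKEN" \
      --discovery-token-ca-cert-hash "sha256:$CA_HASH" \
      --control-plane \
      --apiserver-advertise-address 192.168.39.194 \
      --apiserver-bind-port 8443 \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name ha-094095-m03 \
      --ignore-preflight-errors=all
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet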
	I1008 18:00:05.279641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m03 minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 18:00:05.406989  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 18:00:05.528741  548894 start.go:319] duration metric: took 22.967581062s to joinCluster
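The post-join bookkeeping above, expressed as plain kubectl: the node is labeled with minikube metadata (primary=false marks it as a secondary control plane) and the control-plane NoSchedule taint is removed so it can also run workloads. The extra version/commit labels from the log are omitted here for brevity:

    # Label the new node and drop the control-plane NoSchedule taint (trailing "-").
    kubectl label --overwrite nodes ha-094095-m03 \
      minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
    kubectl taint nodes ha-094095-m03 node-role.kubernetes.io/control-plane:NoSchedule-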
	I1008 18:00:05.528848  548894 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:00:05.529236  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:00:05.530083  548894 out.go:177] * Verifying Kubernetes components...
	I1008 18:00:05.531162  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:00:05.714521  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:00:05.729813  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:00:05.730150  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 18:00:05.730231  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 18:00:05.730539  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m03" to be "Ready" ...
	I1008 18:00:05.730633  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:05.730651  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:05.730664  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:05.730673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:05.734671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.231617  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.231641  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.231650  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.231655  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.234903  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.731584  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.731606  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.731615  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.731620  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.735426  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.231620  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.231630  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.231634  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.235355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.730822  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.730855  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.730867  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.730873  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.735340  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:07.736449  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:08.230853  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.230878  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.230887  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.230892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.234386  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:08.731681  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.731712  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.731722  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.731727  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.735243  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.231587  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.231609  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.231618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.231623  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.235294  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.731675  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.731700  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.731709  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.731713  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.735299  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.231249  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.231335  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.231353  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.231359  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.234866  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.235558  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:10.731835  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.731862  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.731876  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.731881  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.735185  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.231623  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.231632  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.231636  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.235238  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.731791  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.731826  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.731839  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.731845  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.735179  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.231312  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.231339  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.231350  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.231356  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.234779  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.235754  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:12.731629  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.731658  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.731669  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.731673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.735274  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.231468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.231492  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.231500  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.231503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.234905  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.731604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.731613  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.731618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.734788  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.231250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.231274  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.231282  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.231287  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.234694  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.731084  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.731109  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.731117  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.731121  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.735096  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.735874  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:15.231041  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.231070  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.231079  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.231083  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.234482  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:15.731250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.731276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.731288  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.731296  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.734547  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.230897  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.230919  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.230928  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.230937  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.234261  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.731599  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.731608  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.731612  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.735249  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.736046  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:17.231278  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.231302  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.231311  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.231316  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.234212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:17.731562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.731585  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.731594  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.731597  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.735391  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.231528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.231552  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.231561  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.231565  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.234777  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.731570  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.731593  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.731601  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.731608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.735359  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.736085  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:19.231579  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.231604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.231618  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.231622  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.234902  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:19.731112  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.731142  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.731155  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.731162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.734221  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.231563  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.231591  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.231600  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.231605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.234855  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.731738  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.731773  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.731785  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.731792  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.735486  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.231659  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.231685  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.231696  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.231705  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.234967  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.235427  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:21.730803  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.730829  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.730838  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.730843  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.734021  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.231586  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.231613  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.231624  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.231630  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.234981  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.731022  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.731056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.731064  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.731070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.734252  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.231192  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.231215  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.231223  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.231228  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.234975  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.235794  548894 node_ready.go:49] node "ha-094095-m03" has status "Ready":"True"
	I1008 18:00:23.235816  548894 node_ready.go:38] duration metric: took 17.50525839s for node "ha-094095-m03" to be "Ready" ...
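The long run of GET /api/v1/nodes/ha-094095-m03 requests above is minikube's own readiness poll via client-go; a CLI-level equivalent of the same condition, for anyone reproducing the wait by hand (not what minikube itself runs):

    # Wait for the joined node to report the Ready condition, as the poll above does.
    kubectl wait node ha-094095-m03 --for=condition=Ready --timeout=6m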
	I1008 18:00:23.235826  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:23.235893  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:23.235903  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.235914  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.235918  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.241231  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
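The per-pod waits that follow sweep each system-critical label selector listed above. A rough CLI analogue of that sweep, assuming the same kube-system selectors (again a sketch, not minikube's own code path):

    # Wait for each system-critical selector to report Ready pods in kube-system.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done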
	I1008 18:00:23.248355  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.248435  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 18:00:23.248444  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.248452  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.248456  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.250946  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.251489  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.251502  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.251510  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.251515  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.253741  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.254169  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.254188  548894 pod_ready.go:82] duration metric: took 5.808287ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254199  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254280  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 18:00:23.254291  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.254300  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.254309  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.256714  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.257261  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.257276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.257283  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.257286  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.259498  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.260042  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.260061  548894 pod_ready.go:82] duration metric: took 5.850763ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260072  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 18:00:23.260143  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.260153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.260162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.262300  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.262973  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.262989  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.262999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.263005  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.265000  548894 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1008 18:00:23.265522  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.265544  548894 pod_ready.go:82] duration metric: took 5.464426ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265555  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265622  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 18:00:23.265634  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.265643  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.265648  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.267966  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.268468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:23.268479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.268486  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.268491  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.270736  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.271272  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.271290  548894 pod_ready.go:82] duration metric: took 5.727216ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.271300  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.431729  548894 request.go:632] Waited for 160.342792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431825  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431837  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.431850  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.431861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.438271  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:23.631298  548894 request.go:632] Waited for 192.164013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631383  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631391  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.631408  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.631433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.635040  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.635580  548894 pod_ready.go:93] pod "etcd-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.635599  548894 pod_ready.go:82] duration metric: took 364.291447ms for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.635618  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.831837  548894 request.go:632] Waited for 196.121278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831896  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831902  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.831909  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.831913  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.834801  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.031893  548894 request.go:632] Waited for 196.106655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031976  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031981  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.031989  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.031993  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.035406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.036144  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.036163  548894 pod_ready.go:82] duration metric: took 400.535944ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.036173  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.232096  548894 request.go:632] Waited for 195.798323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232173  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232180  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.232192  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.232201  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.235054  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.432054  548894 request.go:632] Waited for 196.298402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432116  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432121  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.432128  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.432132  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.435456  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.436205  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.436233  548894 pod_ready.go:82] duration metric: took 400.05192ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.436253  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.631271  548894 request.go:632] Waited for 194.926969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631366  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631374  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.631384  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.631390  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.635001  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.831928  548894 request.go:632] Waited for 195.938579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832009  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832015  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.832023  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.832027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.834879  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.835519  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.835541  548894 pod_ready.go:82] duration metric: took 399.279605ms for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.835556  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.031600  548894 request.go:632] Waited for 195.955469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031671  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031676  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.031684  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.031689  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.035187  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.231262  548894 request.go:632] Waited for 195.293412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231320  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231326  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.231339  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.231343  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.234515  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.235363  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.235391  548894 pod_ready.go:82] duration metric: took 399.824349ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.235422  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.431278  548894 request.go:632] Waited for 195.760337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431347  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431353  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.431375  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.431379  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.434406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.631990  548894 request.go:632] Waited for 196.659604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632053  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632058  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.632067  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.632070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.635545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.636227  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.636248  548894 pod_ready.go:82] duration metric: took 400.813116ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.636259  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.831790  548894 request.go:632] Waited for 195.428011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831873  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831885  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.831896  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.831903  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.835520  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.031847  548894 request.go:632] Waited for 195.394713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031926  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031931  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.031939  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.031943  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.034885  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:26.035588  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.035611  548894 pod_ready.go:82] duration metric: took 399.345696ms for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.035622  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.231657  548894 request.go:632] Waited for 195.935325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231715  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231720  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.231728  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.231732  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.234989  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.432143  548894 request.go:632] Waited for 196.401893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432242  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432253  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.432262  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.432270  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.435436  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.436096  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.436113  548894 pod_ready.go:82] duration metric: took 400.484447ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.436124  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.632222  548894 request.go:632] Waited for 196.022184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632309  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632317  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.632325  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.632332  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.636157  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.831362  548894 request.go:632] Waited for 194.278962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831419  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831424  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.831433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.831445  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.834670  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.835262  548894 pod_ready.go:93] pod "kube-proxy-krxss" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.835280  548894 pod_ready.go:82] duration metric: took 399.149562ms for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.835292  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.031407  548894 request.go:632] Waited for 196.014244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031471  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.031490  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.031499  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.034651  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.231683  548894 request.go:632] Waited for 196.28215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231743  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231750  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.231761  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.231766  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.234677  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:27.235361  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.235391  548894 pod_ready.go:82] duration metric: took 400.091229ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.235405  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.431237  548894 request.go:632] Waited for 195.72193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431329  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431337  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.431353  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.431360  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.434428  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.631604  548894 request.go:632] Waited for 196.391274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631664  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631669  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.631678  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.631683  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.635129  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.635990  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.636017  548894 pod_ready.go:82] duration metric: took 400.603779ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.636029  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.832057  548894 request.go:632] Waited for 195.932393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832129  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832137  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.832147  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.832152  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.835638  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.031786  548894 request.go:632] Waited for 195.242001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031845  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031850  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.031857  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.031861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.035281  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.035945  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.035968  548894 pod_ready.go:82] duration metric: took 399.926983ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.035978  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.232045  548894 request.go:632] Waited for 195.987112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232140  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.232148  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.232153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.235683  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.431773  548894 request.go:632] Waited for 195.354282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431855  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431860  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.431867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.431872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.435214  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.435815  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.435951  548894 pod_ready.go:82] duration metric: took 399.956305ms for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.435993  548894 pod_ready.go:39] duration metric: took 5.200153143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
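The readiness loop above fetches each control-plane pod and inspects its Ready condition before moving on. A minimal client-go sketch of that same check (names are taken from the log; the kubeconfig path and error handling are simplifying assumptions, not what the test harness actually does):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has its Ready condition set to True,
	// mirroring the pod_ready.go checks in the log above.
	func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Assumption: default kubeconfig; the harness talks to https://192.168.39.99:8443 directly.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podReady(cs, "kube-system", "kube-scheduler-ha-094095")
		fmt.Println(ready, err)
	}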
	I1008 18:00:28.436017  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:00:28.436094  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:00:28.452375  548894 api_server.go:72] duration metric: took 22.923490341s to wait for apiserver process to appear ...
	I1008 18:00:28.452398  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:00:28.452421  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 18:00:28.456918  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 18:00:28.456978  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 18:00:28.456986  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.456994  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.456999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.457742  548894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1008 18:00:28.457798  548894 api_server.go:141] control plane version: v1.31.1
	I1008 18:00:28.457809  548894 api_server.go:131] duration metric: took 5.40508ms to wait for apiserver health ...
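The healthz probe above is a plain HTTPS GET against the apiserver that is expected to return 200 with the literal body "ok". A rough sketch of the same probe; as a simplification it skips certificate verification, whereas minikube authenticates with its client certificate and trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Assumption: TLS verification skipped for brevity; anonymous access to /healthz
		// may be restricted depending on the cluster's RBAC configuration.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.99:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", as seen in the log.
		fmt.Println(resp.StatusCode, string(body))
	}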
	I1008 18:00:28.457822  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:00:28.632286  548894 request.go:632] Waited for 174.373411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632364  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632372  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.632382  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.632388  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.638836  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:28.647332  548894 system_pods.go:59] 24 kube-system pods found
	I1008 18:00:28.647367  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:28.647374  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:28.647379  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:28.647384  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:28.647389  548894 system_pods.go:61] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:28.647394  548894 system_pods.go:61] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:28.647399  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:28.647404  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:28.647409  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:28.647417  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:28.647426  548894 system_pods.go:61] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:28.647432  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:28.647439  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:28.647445  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:28.647451  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:28.647456  548894 system_pods.go:61] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:28.647463  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:28.647468  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:28.647476  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:28.647482  548894 system_pods.go:61] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:28.647489  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:28.647494  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:28.647499  548894 system_pods.go:61] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:28.647505  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:28.647514  548894 system_pods.go:74] duration metric: took 189.683627ms to wait for pod list to return data ...
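The recurring "Waited ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter, not from the API server. The limiter is driven by the QPS and Burst fields on rest.Config; a sketch of raising them (the values here are purely illustrative):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// The client-go defaults are low (QPS 5, Burst 10), which is why the log shows
		// roughly 200ms client-side waits between consecutive GET requests.
		cfg.QPS = 50
		cfg.Burst = 100
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(cs != nil)
	}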
	I1008 18:00:28.647529  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:00:28.831958  548894 request.go:632] Waited for 184.329764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832044  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.832067  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.832073  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.837077  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:28.837234  548894 default_sa.go:45] found service account: "default"
	I1008 18:00:28.837253  548894 default_sa.go:55] duration metric: took 189.716305ms for default service account to be created ...
	I1008 18:00:28.837265  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:00:29.031904  548894 request.go:632] Waited for 194.536031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031965  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031970  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.031979  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.031983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.037622  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:29.044999  548894 system_pods.go:86] 24 kube-system pods found
	I1008 18:00:29.045026  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:29.045032  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:29.045036  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:29.045039  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:29.045043  548894 system_pods.go:89] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:29.045046  548894 system_pods.go:89] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:29.045050  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:29.045053  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:29.045056  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:29.045059  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:29.045063  548894 system_pods.go:89] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:29.045066  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:29.045070  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:29.045076  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:29.045082  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:29.045086  548894 system_pods.go:89] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:29.045089  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:29.045093  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:29.045098  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:29.045104  548894 system_pods.go:89] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:29.045107  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:29.045111  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:29.045114  548894 system_pods.go:89] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:29.045117  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:29.045124  548894 system_pods.go:126] duration metric: took 207.850736ms to wait for k8s-apps to be running ...
	I1008 18:00:29.045133  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:00:29.045176  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:00:29.059678  548894 system_svc.go:56] duration metric: took 14.536958ms WaitForService to wait for kubelet
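The kubelet check above relies on the fact that systemctl is-active --quiet reports the unit state purely through its exit code (0 means active), which is why only a duration is logged. A small local sketch of the same check; the harness actually runs the command with sudo over SSH inside the VM:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 => unit is active; any non-zero exit surfaces here as a non-nil error.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}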
	I1008 18:00:29.059706  548894 kubeadm.go:582] duration metric: took 23.530822988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:00:29.059724  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:00:29.231880  548894 request.go:632] Waited for 172.048672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231961  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231966  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.231974  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.231981  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.238241  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:29.239300  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239332  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239347  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239353  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239361  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239366  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239371  548894 node_conditions.go:105] duration metric: took 179.642781ms to run NodePressure ...
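The NodePressure step reads each node's reported capacity (all three nodes above report 17734596Ki of ephemeral storage and 2 CPUs) together with its pressure conditions. A client-go sketch that lists the same fields, under the same kubeconfig assumption as the earlier sketches:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				// MemoryPressure, DiskPressure and PIDPressure should all be False on a healthy node.
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}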
	I1008 18:00:29.239392  548894 start.go:241] waiting for startup goroutines ...
	I1008 18:00:29.239417  548894 start.go:255] writing updated cluster config ...
	I1008 18:00:29.239708  548894 ssh_runner.go:195] Run: rm -f paused
	I1008 18:00:29.291443  548894 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:00:29.293244  548894 out.go:177] * Done! kubectl is now configured to use "ha-094095" cluster and "default" namespace by default
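"Done!" means the kubeconfig has been rewritten so that the ha-094095 context is current. A small sketch of reading that back programmatically, assuming the default kubeconfig location:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// After the run above this should print "ha-094095".
		fmt.Println("current context:", kubeconfig.CurrentContext)
	}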
	
	
	==> CRI-O <==
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.564540279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=207a8b0d-cebb-4b0c-8a9d-75e9da9defd0 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.565619423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc521bb5-8a00-4b50-b776-7c9d3621fbf6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.566017435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410648565997750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc521bb5-8a00-4b50-b776-7c9d3621fbf6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.566909455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb88fdc0-7591-4321-b601-7cd9462fe6c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.566975579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb88fdc0-7591-4321-b601-7cd9462fe6c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.567263690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb88fdc0-7591-4321-b601-7cd9462fe6c6 name=/runtime.v1.RuntimeService/ListContainers
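The Version, ImageFsInfo and ListContainers entries above are CRI-O answering standard CRI gRPC calls on its unix socket, the same calls crictl and the kubelet issue. A rough Go sketch of making the Version and ListContainers calls directly, assuming the default /var/run/crio/crio.sock socket path:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O is listening on its default socket; the local unix socket has no TLS.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Mirrors the /runtime.v1.RuntimeService/Version entries in the log.
		ver, err := rt.Version(context.TODO(), &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

		// Mirrors the ListContainers call; an empty filter returns the full container list.
		list, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Id[:12], c.Metadata.Name, c.State)
		}
	}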
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.606652238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbbe7c22-bcac-4266-aa8f-0c79091e1433 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.606719585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbbe7c22-bcac-4266-aa8f-0c79091e1433 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.607992202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e965aac0-5f87-45e4-88f9-7a66aafd04f8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.608521903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410648608497221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e965aac0-5f87-45e4-88f9-7a66aafd04f8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.609080543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a9ae56d-768b-4bcf-ad62-61cc9f45e565 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.609158173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a9ae56d-768b-4bcf-ad62-61cc9f45e565 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.609467135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a9ae56d-768b-4bcf-ad62-61cc9f45e565 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.620275762Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a2a1715b-54e6-4257-97d0-d53bca7a4346 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.620663508Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-n779r,Uid:d3a10d4a-6add-4642-961b-b7b00f9e363b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410431779985652,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T18:00:30.266893198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6c7xl,Uid:5be15582-d4c7-4ec3-95db-7f9b7db4280d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728410297358103747,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T17:58:17.031751608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghz9x,Uid:a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410297357205428,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-10-08T17:58:17.036351692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54520f81-08fe-4612-bef9-1fe0016c45ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410297355597197,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-08T17:58:17.037337141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&PodSandboxMetadata{Name:kube-proxy-gnmch,Uid:2e4ec0ad-049b-48e6-90b2-8b8430d821f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410284807011649,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-08T17:58:03.897237361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&PodSandboxMetadata{Name:kindnet-mclfx,Uid:fca2ce96-9193-48a5-9dc7-9d20bde6787f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410284802925523,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T17:58:03.882142734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-094095,Uid:4ab63a85f4abc9ded81a3460d92ef212,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728410273569368635,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.99:8443,kubernetes.io/config.hash: 4ab63a85f4abc9ded81a3460d92ef212,kubernetes.io/config.seen: 2024-10-08T17:57:53.083050125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-094095,Uid:19b7e8dee4daa510f3f23034617cd71c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273552850399,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4da
a510f3f23034617cd71c,},Annotations:map[string]string{kubernetes.io/config.hash: 19b7e8dee4daa510f3f23034617cd71c,kubernetes.io/config.seen: 2024-10-08T17:57:53.083055839Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&PodSandboxMetadata{Name:etcd-ha-094095,Uid:22ef4792d58f06f8319e0939993449f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273547684723,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.99:2379,kubernetes.io/config.hash: 22ef4792d58f06f8319e0939993449f9,kubernetes.io/config.seen: 2024-10-08T17:57:53.083056812Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f021979b9e57f9b85a8710
325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-094095,Uid:2762c7155c0d46d981fd81220017a92c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273536917657,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2762c7155c0d46d981fd81220017a92c,kubernetes.io/config.seen: 2024-10-08T17:57:53.083054587Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-094095,Uid:87f977c77bded84c5cd8640a7d7c6034,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273535142157,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.con
tainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87f977c77bded84c5cd8640a7d7c6034,kubernetes.io/config.seen: 2024-10-08T17:57:53.083053476Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a2a1715b-54e6-4257-97d0-d53bca7a4346 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.621645640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=242cd463-8c56-42b7-8249-709d8054e7d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.621735900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=242cd463-8c56-42b7-8249-709d8054e7d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.622047919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=242cd463-8c56-42b7-8249-709d8054e7d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.659732941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3f1553c-3c15-45fc-98df-2b9c59d16cd8 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.659802072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3f1553c-3c15-45fc-98df-2b9c59d16cd8 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.660675415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6573909-d9b3-4579-aeba-192aafc22cdd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.661129450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410648661099669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6573909-d9b3-4579-aeba-192aafc22cdd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.661953486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=762722f4-224c-4aef-96e7-14cdf7de5bb8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.662005932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=762722f4-224c-4aef-96e7-14cdf7de5bb8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:08 ha-094095 crio[659]: time="2024-10-08 18:04:08.662259025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=762722f4-224c-4aef-96e7-14cdf7de5bb8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4f194cdf306a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   eaf6acce4786e       busybox-7dff88458-n779r
	079e7a8fee78f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   875cfacbeeb23       coredns-7c65d6cfc9-6c7xl
	1eb4935d542c2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   9d8f70dc17585       coredns-7c65d6cfc9-ghz9x
	dfdfc8735b822       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   d884b794bcbf8       storage-provisioner
	17a4523dfe3c8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   c791fa497b85a       kindnet-mclfx
	347854044c294       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   29ed3e17d1aab       kube-proxy-gnmch
	8f117035b9a9a       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   13853a6e388f1       kube-vip-ha-094095
	9c418725a44b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b68b365f16def       etcd-ha-094095
	3b8241e00230e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c13c52688447       kube-apiserver-ha-094095
	0224d96e8ab1a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f021979b9e57f       kube-scheduler-ha-094095
	ec97e876ef66b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a2f40f00bb5ff       kube-controller-manager-ha-094095
	
	
	==> coredns [079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee] <==
	[INFO] 10.244.1.2:46939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173909s
	[INFO] 10.244.1.2:43197 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152065s
	[INFO] 10.244.0.4:54276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776636s
	[INFO] 10.244.0.4:42844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001027134s
	[INFO] 10.244.0.4:33552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087486s
	[INFO] 10.244.0.4:40894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128456s
	[INFO] 10.244.2.2:37156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090694s
	[INFO] 10.244.2.2:35975 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000342501s
	[INFO] 10.244.2.2:56819 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008022s
	[INFO] 10.244.2.2:40613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107574s
	[INFO] 10.244.1.2:38959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208641s
	[INFO] 10.244.0.4:58386 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011149s
	[INFO] 10.244.0.4:56827 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016311s
	[INFO] 10.244.0.4:52547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068216s
	[INFO] 10.244.0.4:59149 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077593s
	[INFO] 10.244.2.2:49444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156535s
	[INFO] 10.244.2.2:51787 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111699s
	[INFO] 10.244.2.2:52768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107964s
	[INFO] 10.244.2.2:53538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071551s
	[INFO] 10.244.1.2:52231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220976s
	[INFO] 10.244.0.4:45893 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145642s
	[INFO] 10.244.0.4:50564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012308s
	[INFO] 10.244.0.4:40912 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110407s
	[INFO] 10.244.2.2:48559 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182361s
	[INFO] 10.244.2.2:42189 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123843s
	
	
	==> coredns [1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02] <==
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000403051s
	[INFO] 10.244.2.2:33432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198542s
	[INFO] 10.244.2.2:43175 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00011602s
	[INFO] 10.244.2.2:39986 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00007233s
	[INFO] 10.244.2.2:43098 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001798194s
	[INFO] 10.244.1.2:51904 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006238586s
	[INFO] 10.244.1.2:39841 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245332s
	[INFO] 10.244.1.2:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010411466s
	[INFO] 10.244.0.4:36134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131817s
	[INFO] 10.244.0.4:60392 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136485s
	[INFO] 10.244.0.4:47750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001276s
	[INFO] 10.244.0.4:53066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112589s
	[INFO] 10.244.2.2:50951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171312s
	[INFO] 10.244.2.2:36151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001719697s
	[INFO] 10.244.2.2:59876 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00134295s
	[INFO] 10.244.2.2:34156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121408s
	[INFO] 10.244.1.2:40835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210172s
	[INFO] 10.244.1.2:35561 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210453s
	[INFO] 10.244.1.2:58285 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:57787 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236305s
	[INFO] 10.244.1.2:52947 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185701s
	[INFO] 10.244.1.2:38121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000200581s
	[INFO] 10.244.0.4:37934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195898s
	[INFO] 10.244.2.2:51605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210836s
	[INFO] 10.244.2.2:44666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117181s
	
	
	==> describe nodes <==
	Name:               ha-094095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:57:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-094095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f253fb8c294514826ad247cbfc784d
	  System UUID:                14f253fb-8c29-4514-826a-d247cbfc784d
	  Boot ID:                    6cdd0146-42c4-4814-93e6-3af5699e77ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-n779r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-6c7xl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m4s
	  kube-system                 coredns-7c65d6cfc9-ghz9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m4s
	  kube-system                 etcd-ha-094095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-mclfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-094095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-094095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-gnmch                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-094095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-094095                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m3s   kube-proxy       
	  Normal  Starting                 6m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s   kubelet          Node ha-094095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s   kubelet          Node ha-094095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s   kubelet          Node ha-094095 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  NodeReady                5m52s  kubelet          Node ha-094095 status is now: NodeReady
	  Normal  RegisteredNode           5m6s   node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	
	
	Name:               ha-094095-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:01:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-094095-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6846904a528149b4bec4ab05607145f5
	  System UUID:                6846904a-5281-49b4-bec4-ab05607145f5
	  Boot ID:                    92a2dec0-2bc9-44db-94e9-e4a68690b144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxdk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-094095-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-f5x42                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-ha-094095-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-094095-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-r55hk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-ha-094095-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-094095-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-094095-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-094095-m02 status is now: NodeNotReady
	
	
	Name:               ha-094095-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    ha-094095-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cca5410c10d94705a0a750a2a36dfcf7
	  System UUID:                cca5410c-10d9-4705-a0a7-50a2a36dfcf7
	  Boot ID:                    a52600ea-f5af-4184-95ce-18bc5a4ff10e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rxwcg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-094095-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-8v7s4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-094095-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-094095-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-krxss                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-094095-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-094095-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-094095-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	
	
	Name:               ha-094095-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_01_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:01:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-094095-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6fe409be99242ac858632e59843d080
	  System UUID:                c6fe409b-e992-42ac-8586-32e59843d080
	  Boot ID:                    10df0150-6a8d-4d3e-8551-af1fe0638414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jhqlp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-jjgsh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  RegisteredNode           3m               node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-094095-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-094095-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 17:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050015] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039380] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.822235] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417178] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.589695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.867596] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.064259] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063997] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.185531] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.116355] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.250177] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.801506] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.578485] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.057293] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117363] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.084526] kauditd_printk_skb: 79 callbacks suppressed
	[Oct 8 17:58] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.243247] kauditd_printk_skb: 28 callbacks suppressed
	[ +42.891327] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7] <==
	{"level":"warn","ts":"2024-10-08T18:04:08.671060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.719542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.784471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.819202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.874007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.880666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.913500Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.919670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.922189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.926358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.939217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.970915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.977713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.984525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.989045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:08.993125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.001335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.007709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.013551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.016753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.018790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.019924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.024110Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.029968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:09.041026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:04:09 up 6 min,  0 users,  load average: 0.46, 0.40, 0.20
	Linux ha-094095 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a] <==
	I1008 18:03:36.521090       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:03:46.529525       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:03:46.529570       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:03:46.529732       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:03:46.529757       1 main.go:299] handling current node
	I1008 18:03:46.529773       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:03:46.529798       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:03:46.529860       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:03:46.529884       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:03:56.530637       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:03:56.530728       1 main.go:299] handling current node
	I1008 18:03:56.530780       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:03:56.530799       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:03:56.530947       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:03:56.530969       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:03:56.531022       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:03:56.531040       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:06.521023       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:06.521156       1 main.go:299] handling current node
	I1008 18:04:06.521246       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:06.521314       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:06.521746       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:06.521831       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:06.522370       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:06.522563       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b] <==
	I1008 17:57:58.485779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 17:57:58.491495       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.99]
	I1008 17:57:58.492135       1 controller.go:615] quota admission added evaluator for: endpoints
	I1008 17:57:58.499200       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 17:57:58.903637       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 17:58:00.054350       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 17:58:00.074068       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 17:58:00.230930       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 17:58:03.854509       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1008 17:58:03.954697       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1008 18:00:38.037771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45714: use of closed network connection
	E1008 18:00:38.232043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45744: use of closed network connection
	E1008 18:00:38.418256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45748: use of closed network connection
	E1008 18:00:38.622516       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45768: use of closed network connection
	E1008 18:00:38.796785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45788: use of closed network connection
	E1008 18:00:38.988513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45812: use of closed network connection
	E1008 18:00:39.174560       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45828: use of closed network connection
	E1008 18:00:39.350317       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45850: use of closed network connection
	E1008 18:00:39.525813       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45854: use of closed network connection
	E1008 18:00:39.828048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49850: use of closed network connection
	E1008 18:00:40.000068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49874: use of closed network connection
	E1008 18:00:40.192753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49888: use of closed network connection
	E1008 18:00:40.379456       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49904: use of closed network connection
	E1008 18:00:40.562970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49918: use of closed network connection
	E1008 18:00:40.742948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49938: use of closed network connection
	
	
	==> kube-controller-manager [ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb] <==
	I1008 18:01:09.767306       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-094095-m04" podCIDRs=["10.244.3.0/24"]
	I1008 18:01:09.767482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.015142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.174634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.537159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.265250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.321671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716760       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-094095-m04"
	I1008 18:01:13.777151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:20.033294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108639       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:01:28.124876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.732886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:40.603842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:02:28.755242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.757889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:02:28.778675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.891800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.567817ms"
	I1008 18:02:28.891887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.019µs"
	I1008 18:02:30.013028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:33.959772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	
	
	==> kube-proxy [347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 17:58:05.534485       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 17:58:05.568766       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	E1008 17:58:05.568940       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 17:58:05.609153       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 17:58:05.609181       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 17:58:05.609201       1 server_linux.go:169] "Using iptables Proxier"
	I1008 17:58:05.612762       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 17:58:05.613968       1 server.go:483] "Version info" version="v1.31.1"
	I1008 17:58:05.614042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 17:58:05.616792       1 config.go:199] "Starting service config controller"
	I1008 17:58:05.617139       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 17:58:05.617374       1 config.go:105] "Starting endpoint slice config controller"
	I1008 17:58:05.617451       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 17:58:05.618851       1 config.go:328] "Starting node config controller"
	I1008 17:58:05.619090       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 17:58:05.718484       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 17:58:05.718497       1 shared_informer.go:320] Caches are synced for service config
	I1008 17:58:05.720100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20] <==
	E1008 18:00:30.199446       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rzflt" node="ha-094095-m03"
	E1008 18:00:30.199562       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e0ead4a-bdd7-4fe2-8070-a2e4680f7988(default/busybox-7dff88458-rzflt) was assumed on ha-094095-m03 but assigned to ha-094095-m02" pod="default/busybox-7dff88458-rzflt"
	E1008 18:00:30.201601       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-rzflt"
	I1008 18:00:30.201672       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rzflt" node="ha-094095-m02"
	E1008 18:00:30.241278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.243855       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 00074fc5-40f9-403b-9cec-3f333b177d47(default/busybox-7dff88458-2hz9n) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2hz9n"
	E1008 18:00:30.248134       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-2hz9n"
	I1008 18:00:30.248955       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.302814       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.303201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 399813b8-6199-4631-af76-66e7e8bf4b8c(default/busybox-7dff88458-rxwcg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rxwcg"
	E1008 18:00:30.303327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" pod="default/busybox-7dff88458-rxwcg"
	I1008 18:00:30.303461       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.454050       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-l6wvv\" not found" pod="default/busybox-7dff88458-l6wvv"
	E1008 18:01:09.806729       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.806888       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c9b872af-5075-4c26-99cf-282b077912ee(kube-system/kube-proxy-jjgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jjgsh"
	E1008 18:01:09.806916       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-jjgsh"
	I1008 18:01:09.806962       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.807512       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.807581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2f9978f0-fb58-41fb-ac79-c07ec22f8b12(kube-system/kindnet-jhqlp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jhqlp"
	E1008 18:01:09.807603       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" pod="kube-system/kindnet-jhqlp"
	I1008 18:01:09.807627       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.868191       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	E1008 18:01:09.869875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6257090e-676b-45ea-9261-104b1ba829f3(kube-system/kube-proxy-x5wf6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-x5wf6"
	E1008 18:01:09.871281       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-x5wf6"
	I1008 18:01:09.871556       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	
	
	==> kubelet <==
	Oct 08 18:02:50 ha-094095 kubelet[1309]: E1008 18:02:50.292324    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410570291241415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.254913    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:03:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293753    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293782    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295059    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295735    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297939    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297984    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300086    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300349    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302156    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302530    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304820    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304911    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.254307    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307018    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307069    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.410356501s)
ha_test.go:415: expected profile "ha-094095" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-094095\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-094095\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-094095\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.99\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.65\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.194\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.33\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"m
etallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":
262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (1.324723378s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m03_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:57:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:57:18.946903  548894 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:57:18.947145  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947153  548894 out.go:358] Setting ErrFile to fd 2...
	I1008 17:57:18.947157  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947344  548894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:57:18.947912  548894 out.go:352] Setting JSON to false
	I1008 17:57:18.948876  548894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5991,"bootTime":1728404248,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:57:18.948933  548894 start.go:139] virtualization: kvm guest
	I1008 17:57:18.950969  548894 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:57:18.952033  548894 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:57:18.952082  548894 notify.go:220] Checking for updates...
	I1008 17:57:18.954369  548894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:57:18.955681  548894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:57:18.956842  548894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:18.957830  548894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:57:18.959069  548894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:57:18.960234  548894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:57:18.994761  548894 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 17:57:18.995800  548894 start.go:297] selected driver: kvm2
	I1008 17:57:18.995813  548894 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:57:18.995824  548894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:57:18.996586  548894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:18.996660  548894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:57:19.011273  548894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:57:19.011313  548894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:57:19.011548  548894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:57:19.011585  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:19.011625  548894 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 17:57:19.011636  548894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 17:57:19.011687  548894 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:19.011804  548894 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:19.013449  548894 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 17:57:19.014789  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:19.014817  548894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 17:57:19.014826  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:57:19.014907  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:57:19.014919  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:57:19.015263  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:19.015288  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json: {Name:mk4a4bbfc5e4991434a64e3c2f362f3acde8e751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:19.015419  548894 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:57:19.015446  548894 start.go:364] duration metric: took 15.142µs to acquireMachinesLock for "ha-094095"
	I1008 17:57:19.015463  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:57:19.015507  548894 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 17:57:19.017014  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:57:19.017133  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:57:19.017171  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:57:19.031391  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I1008 17:57:19.031835  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:57:19.032448  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:57:19.032468  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:57:19.032843  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:57:19.033048  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:19.033189  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:19.033336  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:57:19.033367  548894 client.go:168] LocalClient.Create starting
	I1008 17:57:19.033396  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:57:19.033427  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033446  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033499  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:57:19.033517  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033530  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033545  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:57:19.033558  548894 main.go:141] libmachine: (ha-094095) Calling .PreCreateCheck
	I1008 17:57:19.033903  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:19.034253  548894 main.go:141] libmachine: Creating machine...
	I1008 17:57:19.034267  548894 main.go:141] libmachine: (ha-094095) Calling .Create
	I1008 17:57:19.034420  548894 main.go:141] libmachine: (ha-094095) Creating KVM machine...
	I1008 17:57:19.035565  548894 main.go:141] libmachine: (ha-094095) DBG | found existing default KVM network
	I1008 17:57:19.036249  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.036120  548918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1008 17:57:19.036283  548894 main.go:141] libmachine: (ha-094095) DBG | created network xml: 
	I1008 17:57:19.036302  548894 main.go:141] libmachine: (ha-094095) DBG | <network>
	I1008 17:57:19.036314  548894 main.go:141] libmachine: (ha-094095) DBG |   <name>mk-ha-094095</name>
	I1008 17:57:19.036323  548894 main.go:141] libmachine: (ha-094095) DBG |   <dns enable='no'/>
	I1008 17:57:19.036331  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036342  548894 main.go:141] libmachine: (ha-094095) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 17:57:19.036349  548894 main.go:141] libmachine: (ha-094095) DBG |     <dhcp>
	I1008 17:57:19.036361  548894 main.go:141] libmachine: (ha-094095) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 17:57:19.036370  548894 main.go:141] libmachine: (ha-094095) DBG |     </dhcp>
	I1008 17:57:19.036386  548894 main.go:141] libmachine: (ha-094095) DBG |   </ip>
	I1008 17:57:19.036427  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036447  548894 main.go:141] libmachine: (ha-094095) DBG | </network>
	I1008 17:57:19.036455  548894 main.go:141] libmachine: (ha-094095) DBG | 
	I1008 17:57:19.041263  548894 main.go:141] libmachine: (ha-094095) DBG | trying to create private KVM network mk-ha-094095 192.168.39.0/24...
	I1008 17:57:19.105180  548894 main.go:141] libmachine: (ha-094095) DBG | private KVM network mk-ha-094095 192.168.39.0/24 created
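
For reference: the block above is an ordinary libvirt network definition, created after a free 192.168.39.0/24 subnet was picked. A rough Go sketch of probing for a free private /24 follows; the starting subnet, the step size, and the overlap check are assumptions for illustration only, not minikube's actual allocator.

    // subnet_sketch.go -- rough illustration of choosing a free private /24;
    // not minikube's actual network allocation logic.
    package main

    import (
    	"fmt"
    	"net"
    )

    // overlaps reports whether cidr collides with any address range already
    // assigned to a local interface.
    func overlaps(cidr *net.IPNet) bool {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return true // be conservative on error
    	}
    	for _, a := range addrs {
    		ipnet, ok := a.(*net.IPNet)
    		if ok && (cidr.Contains(ipnet.IP) || ipnet.Contains(cidr.IP)) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// Try 192.168.39.0/24 first, then step upward until a free subnet is found.
    	for third := 39; third <= 254; third += 11 {
    		_, cidr, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		if err != nil {
    			panic(err)
    		}
    		if !overlaps(cidr) {
    			fmt.Println("using free private subnet", cidr)
    			return
    		}
    	}
    	fmt.Println("no free private subnet found")
    }
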
	I1008 17:57:19.105208  548894 main.go:141] libmachine: (ha-094095) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.105220  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.105167  548918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.105237  548894 main.go:141] libmachine: (ha-094095) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:57:19.105263  548894 main.go:141] libmachine: (ha-094095) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:57:19.385345  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.385226  548918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa...
	I1008 17:57:19.617977  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617838  548918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk...
	I1008 17:57:19.618008  548894 main.go:141] libmachine: (ha-094095) DBG | Writing magic tar header
	I1008 17:57:19.618021  548894 main.go:141] libmachine: (ha-094095) DBG | Writing SSH key tar header
	I1008 17:57:19.618031  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617973  548918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.618141  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095
	I1008 17:57:19.618165  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 (perms=drwx------)
	I1008 17:57:19.618171  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:57:19.618178  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:57:19.618187  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:57:19.618193  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:57:19.618199  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.618206  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:57:19.618211  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:57:19.618216  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:57:19.618224  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:57:19.618231  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home
	I1008 17:57:19.618238  548894 main.go:141] libmachine: (ha-094095) DBG | Skipping /home - not owner
	I1008 17:57:19.618249  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:57:19.618261  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:19.619347  548894 main.go:141] libmachine: (ha-094095) define libvirt domain using xml: 
	I1008 17:57:19.619369  548894 main.go:141] libmachine: (ha-094095) <domain type='kvm'>
	I1008 17:57:19.619378  548894 main.go:141] libmachine: (ha-094095)   <name>ha-094095</name>
	I1008 17:57:19.619388  548894 main.go:141] libmachine: (ha-094095)   <memory unit='MiB'>2200</memory>
	I1008 17:57:19.619396  548894 main.go:141] libmachine: (ha-094095)   <vcpu>2</vcpu>
	I1008 17:57:19.619402  548894 main.go:141] libmachine: (ha-094095)   <features>
	I1008 17:57:19.619410  548894 main.go:141] libmachine: (ha-094095)     <acpi/>
	I1008 17:57:19.619420  548894 main.go:141] libmachine: (ha-094095)     <apic/>
	I1008 17:57:19.619427  548894 main.go:141] libmachine: (ha-094095)     <pae/>
	I1008 17:57:19.619444  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619470  548894 main.go:141] libmachine: (ha-094095)   </features>
	I1008 17:57:19.619484  548894 main.go:141] libmachine: (ha-094095)   <cpu mode='host-passthrough'>
	I1008 17:57:19.619491  548894 main.go:141] libmachine: (ha-094095)   
	I1008 17:57:19.619500  548894 main.go:141] libmachine: (ha-094095)   </cpu>
	I1008 17:57:19.619506  548894 main.go:141] libmachine: (ha-094095)   <os>
	I1008 17:57:19.619515  548894 main.go:141] libmachine: (ha-094095)     <type>hvm</type>
	I1008 17:57:19.619527  548894 main.go:141] libmachine: (ha-094095)     <boot dev='cdrom'/>
	I1008 17:57:19.619536  548894 main.go:141] libmachine: (ha-094095)     <boot dev='hd'/>
	I1008 17:57:19.619547  548894 main.go:141] libmachine: (ha-094095)     <bootmenu enable='no'/>
	I1008 17:57:19.619559  548894 main.go:141] libmachine: (ha-094095)   </os>
	I1008 17:57:19.619569  548894 main.go:141] libmachine: (ha-094095)   <devices>
	I1008 17:57:19.619578  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='cdrom'>
	I1008 17:57:19.619590  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/boot2docker.iso'/>
	I1008 17:57:19.619601  548894 main.go:141] libmachine: (ha-094095)       <target dev='hdc' bus='scsi'/>
	I1008 17:57:19.619612  548894 main.go:141] libmachine: (ha-094095)       <readonly/>
	I1008 17:57:19.619621  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619648  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='disk'>
	I1008 17:57:19.619669  548894 main.go:141] libmachine: (ha-094095)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:57:19.619678  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk'/>
	I1008 17:57:19.619688  548894 main.go:141] libmachine: (ha-094095)       <target dev='hda' bus='virtio'/>
	I1008 17:57:19.619694  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619711  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619719  548894 main.go:141] libmachine: (ha-094095)       <source network='mk-ha-094095'/>
	I1008 17:57:19.619724  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619731  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619735  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619743  548894 main.go:141] libmachine: (ha-094095)       <source network='default'/>
	I1008 17:57:19.619747  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619752  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619756  548894 main.go:141] libmachine: (ha-094095)     <serial type='pty'>
	I1008 17:57:19.619763  548894 main.go:141] libmachine: (ha-094095)       <target port='0'/>
	I1008 17:57:19.619769  548894 main.go:141] libmachine: (ha-094095)     </serial>
	I1008 17:57:19.619798  548894 main.go:141] libmachine: (ha-094095)     <console type='pty'>
	I1008 17:57:19.619831  548894 main.go:141] libmachine: (ha-094095)       <target type='serial' port='0'/>
	I1008 17:57:19.619844  548894 main.go:141] libmachine: (ha-094095)     </console>
	I1008 17:57:19.619859  548894 main.go:141] libmachine: (ha-094095)     <rng model='virtio'>
	I1008 17:57:19.619885  548894 main.go:141] libmachine: (ha-094095)       <backend model='random'>/dev/random</backend>
	I1008 17:57:19.619895  548894 main.go:141] libmachine: (ha-094095)     </rng>
	I1008 17:57:19.619903  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619912  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619921  548894 main.go:141] libmachine: (ha-094095)   </devices>
	I1008 17:57:19.619930  548894 main.go:141] libmachine: (ha-094095) </domain>
	I1008 17:57:19.619943  548894 main.go:141] libmachine: (ha-094095) 
	I1008 17:57:19.623957  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:c2:1c:c1 in network default
	I1008 17:57:19.624533  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:19.624567  548894 main.go:141] libmachine: (ha-094095) Ensuring networks are active...
	I1008 17:57:19.625167  548894 main.go:141] libmachine: (ha-094095) Ensuring network default is active
	I1008 17:57:19.625513  548894 main.go:141] libmachine: (ha-094095) Ensuring network mk-ha-094095 is active
	I1008 17:57:19.626008  548894 main.go:141] libmachine: (ha-094095) Getting domain xml...
	I1008 17:57:19.626619  548894 main.go:141] libmachine: (ha-094095) Creating domain...
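
The domain XML above is likewise plain libvirt XML. A hypothetical way to reproduce the define-and-boot step by hand is sketched below, shelling out to virsh from Go; the file name ha-094095.xml is a placeholder, and the real kvm2 driver talks to libvirt programmatically rather than invoking virsh.

    // domain_sketch.go -- hypothetical illustration only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and prints its combined output.
    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("$ %s %v\n%s", name, args, out)
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Assumes the XML logged above was saved to ha-094095.xml (placeholder path).
    	run("virsh", "define", "ha-094095.xml") // register the domain with libvirtd
    	run("virsh", "start", "ha-094095")      // boot the VM
    	// The DHCP lease handed out on the private network is what gets polled
    	// for next ("Waiting to get IP...").
    	run("virsh", "net-dhcp-leases", "mk-ha-094095")
    }
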
	I1008 17:57:20.795900  548894 main.go:141] libmachine: (ha-094095) Waiting to get IP...
	I1008 17:57:20.796661  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:20.797068  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:20.797096  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:20.797046  548918 retry.go:31] will retry after 205.911312ms: waiting for machine to come up
	I1008 17:57:21.004526  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.004999  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.005029  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.004943  548918 retry.go:31] will retry after 273.425618ms: waiting for machine to come up
	I1008 17:57:21.280506  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.280861  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.280894  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.280804  548918 retry.go:31] will retry after 435.479274ms: waiting for machine to come up
	I1008 17:57:21.717289  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.717636  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.717662  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.717595  548918 retry.go:31] will retry after 576.307625ms: waiting for machine to come up
	I1008 17:57:22.295076  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.295499  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.295527  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.295461  548918 retry.go:31] will retry after 636.373654ms: waiting for machine to come up
	I1008 17:57:22.933047  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.933364  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.933391  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.933317  548918 retry.go:31] will retry after 741.414571ms: waiting for machine to come up
	I1008 17:57:23.676038  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:23.676368  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:23.676441  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:23.676362  548918 retry.go:31] will retry after 726.748749ms: waiting for machine to come up
	I1008 17:57:24.404401  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:24.404771  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:24.404801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:24.404726  548918 retry.go:31] will retry after 1.449573768s: waiting for machine to come up
	I1008 17:57:25.856490  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:25.856930  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:25.856961  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:25.856877  548918 retry.go:31] will retry after 1.340937339s: waiting for machine to come up
	I1008 17:57:27.199433  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:27.199826  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:27.199863  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:27.199804  548918 retry.go:31] will retry after 1.798441674s: waiting for machine to come up
	I1008 17:57:28.999424  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:28.999921  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:28.999945  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:28.999873  548918 retry.go:31] will retry after 1.937304185s: waiting for machine to come up
	I1008 17:57:30.939309  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:30.939791  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:30.939819  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:30.939738  548918 retry.go:31] will retry after 3.500432638s: waiting for machine to come up
	I1008 17:57:34.441923  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:34.442356  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:34.442385  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:34.442290  548918 retry.go:31] will retry after 3.09089187s: waiting for machine to come up
	I1008 17:57:37.536439  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:37.536781  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:37.536801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:37.536736  548918 retry.go:31] will retry after 5.395822577s: waiting for machine to come up
	I1008 17:57:42.937057  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937477  548894 main.go:141] libmachine: (ha-094095) Found IP for machine: 192.168.39.99
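
The "will retry after ..." lines above come from a generic poll-with-growing-delay helper. A minimal sketch of that pattern is below; the delay growth, jitter, and timeout values are assumptions for illustration and not minikube's actual retry.go.

    // retry_sketch.go -- minimal sketch of the grow-and-retry pattern seen in
    // the "will retry after ..." lines above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling fn with an increasing, jittered delay until it
    // succeeds or the deadline passes.
    func retryUntil(deadline time.Time, fn func() error) error {
    	delay := 200 * time.Millisecond
    	for {
    		if err := fn(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		if delay < 5*time.Second {
    			delay = delay * 3 / 2 // grow the base delay
    		}
    	}
    }

    func main() {
    	attempts := 0
    	err := retryUntil(time.Now().Add(30*time.Second), func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("no DHCP lease yet")
    		}
    		return nil
    	})
    	fmt.Println("result:", err, "after", attempts, "attempts")
    }
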
	I1008 17:57:42.937503  548894 main.go:141] libmachine: (ha-094095) Reserving static IP address...
	I1008 17:57:42.937532  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has current primary IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937886  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find host DHCP lease matching {name: "ha-094095", mac: "52:54:00:bf:fa:3a", ip: "192.168.39.99"} in network mk-ha-094095
	I1008 17:57:43.006083  548894 main.go:141] libmachine: (ha-094095) DBG | Getting to WaitForSSH function...
	I1008 17:57:43.006114  548894 main.go:141] libmachine: (ha-094095) Reserved static IP address: 192.168.39.99
	I1008 17:57:43.006128  548894 main.go:141] libmachine: (ha-094095) Waiting for SSH to be available...
	I1008 17:57:43.008468  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.008879  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.008907  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.009020  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH client type: external
	I1008 17:57:43.009041  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa (-rw-------)
	I1008 17:57:43.009062  548894 main.go:141] libmachine: (ha-094095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:57:43.009119  548894 main.go:141] libmachine: (ha-094095) DBG | About to run SSH command:
	I1008 17:57:43.009138  548894 main.go:141] libmachine: (ha-094095) DBG | exit 0
	I1008 17:57:43.130112  548894 main.go:141] libmachine: (ha-094095) DBG | SSH cmd err, output: <nil>: 
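
The probe above simply runs `exit 0` over SSH with host-key checking disabled and treats a zero exit status as "SSH is up". A hedged sketch of the same check, with a placeholder key path:

    // sshwait_sketch.go -- sketch of the "exit 0" SSH readiness probe above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady returns true once the guest accepts an SSH session and runs a
    // trivial command successfully.
    func sshReady(ip, key string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", key,
    		"docker@"+ip, "exit 0")
    	return cmd.Run() == nil // zero exit status means SSH is available
    }

    func main() {
    	ip := "192.168.39.99"                        // from the DHCP lease above
    	key := "/path/to/machines/ha-094095/id_rsa" // placeholder key path
    	for !sshReady(ip, key) {
    		fmt.Println("SSH not ready, retrying...")
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("SSH is available")
    }
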
	I1008 17:57:43.130367  548894 main.go:141] libmachine: (ha-094095) KVM machine creation complete!
	I1008 17:57:43.130653  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:43.131203  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131384  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131553  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:57:43.131567  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:57:43.132696  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:57:43.132710  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:57:43.132718  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:57:43.132724  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.134855  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135157  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.135186  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135341  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.135500  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135635  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135753  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.135900  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.136116  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.136132  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:57:43.237532  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.237562  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:57:43.237573  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.240102  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240361  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.240386  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240541  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.240728  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.240888  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.241033  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.241194  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.241372  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.241387  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:57:43.342754  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:57:43.342848  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:57:43.342862  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:57:43.342875  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343129  548894 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 17:57:43.343169  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343355  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.345781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346150  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.346172  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346401  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.346572  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346747  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346898  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.347071  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.347247  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.347259  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 17:57:43.463654  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 17:57:43.463696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.466255  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466646  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.466682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466840  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.467010  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467143  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467243  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.467378  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.467581  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.467603  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:57:43.579438  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.579474  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:57:43.579515  548894 buildroot.go:174] setting up certificates
	I1008 17:57:43.579525  548894 provision.go:84] configureAuth start
	I1008 17:57:43.579536  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.579814  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:43.582136  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582503  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.582528  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.584820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585187  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.585207  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585310  548894 provision.go:143] copyHostCerts
	I1008 17:57:43.585352  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585401  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:57:43.585412  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585494  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:57:43.585624  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585659  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:57:43.585677  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585716  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:57:43.585797  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585818  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:57:43.585827  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585862  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:57:43.585945  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
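
The line above issues a server certificate whose SANs cover the VM's IP and hostnames. A simplified sketch of producing such a certificate with Go's crypto/x509 is below; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair.

    // servercert_sketch.go -- simplified illustration of a server cert carrying
    // the SANs logged above; self-signed, not CA-signed as in the real flow.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-094095"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log: 127.0.0.1 192.168.39.99 ha-094095 localhost minikube
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.99")},
    		DNSNames:    []string{"ha-094095", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
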
	I1008 17:57:43.673469  548894 provision.go:177] copyRemoteCerts
	I1008 17:57:43.673538  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:57:43.673570  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.676617  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.676907  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.676942  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.677124  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.677287  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.677489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.677596  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:43.759344  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:57:43.759416  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 17:57:43.781917  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:57:43.781981  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:57:43.804256  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:57:43.804312  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:57:43.826921  548894 provision.go:87] duration metric: took 247.384803ms to configureAuth
	I1008 17:57:43.826944  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:57:43.827107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:57:43.827185  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.830340  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830654  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.830685  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830917  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.831091  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831234  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831362  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.831590  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.831761  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.831775  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:57:44.043562  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:57:44.043593  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:57:44.043602  548894 main.go:141] libmachine: (ha-094095) Calling .GetURL
	I1008 17:57:44.044870  548894 main.go:141] libmachine: (ha-094095) DBG | Using libvirt version 6000000
	I1008 17:57:44.047119  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047449  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.047478  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047637  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:57:44.047652  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:57:44.047661  548894 client.go:171] duration metric: took 25.014282218s to LocalClient.Create
	I1008 17:57:44.047690  548894 start.go:167] duration metric: took 25.014354001s to libmachine.API.Create "ha-094095"
	I1008 17:57:44.047702  548894 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 17:57:44.047716  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:57:44.047739  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.048014  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:57:44.048045  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.050022  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050306  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.050347  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050505  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.050666  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.050837  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.050949  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.132504  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:57:44.136621  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:57:44.136645  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:57:44.136713  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:57:44.136806  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:57:44.136818  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:57:44.136924  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:57:44.146103  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:44.168356  548894 start.go:296] duration metric: took 120.640584ms for postStartSetup
	I1008 17:57:44.168411  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:44.169087  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.172425  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.172799  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.172823  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.173056  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:44.173256  548894 start.go:128] duration metric: took 25.157738621s to createHost
	I1008 17:57:44.173281  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.175394  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175685  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.175711  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175872  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.176022  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176162  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176257  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.176381  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:44.176571  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:44.176587  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:57:44.278668  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410264.248509692
	
	I1008 17:57:44.278691  548894 fix.go:216] guest clock: 1728410264.248509692
	I1008 17:57:44.278710  548894 fix.go:229] Guest: 2024-10-08 17:57:44.248509692 +0000 UTC Remote: 2024-10-08 17:57:44.173269639 +0000 UTC m=+25.264229848 (delta=75.240053ms)
	I1008 17:57:44.278730  548894 fix.go:200] guest clock delta is within tolerance: 75.240053ms
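
The clock check above runs `date +%s.%N` in the guest and compares it with the host timestamp: 17:57:44.248509692 minus 17:57:44.173269639 gives the logged 75.240053ms delta. A small sketch of that comparison, assuming a 1s tolerance purely for illustration:

    // clockdelta_sketch.go -- illustration of the guest-clock check above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` inside the guest, taken from the log above.
    	guestRaw := "1728410264.248509692"
    	parts := strings.SplitN(guestRaw, ".", 2)
    	secs, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		panic(err)
    	}
    	nanos, err := strconv.ParseInt(parts[1], 10, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(secs, nanos)

    	// Host-side timestamp recorded when the command returned (from the log).
    	host := time.Date(2024, 10, 8, 17, 57, 44, 173269639, time.UTC)

    	delta := guest.Sub(host) // 75.240053ms for the values above
    	within := delta < time.Second && delta > -time.Second
    	fmt.Printf("guest clock delta: %v (within assumed 1s tolerance: %v)\n", delta, within)
    }
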
	I1008 17:57:44.278735  548894 start.go:83] releasing machines lock for "ha-094095", held for 25.26328044s
	I1008 17:57:44.278761  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.279011  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.281403  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281704  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.281728  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281844  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282331  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282492  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282608  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:57:44.282649  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.282695  548894 ssh_runner.go:195] Run: cat /version.json
	I1008 17:57:44.282718  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.285197  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285467  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285561  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285596  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285720  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.285878  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.285947  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285972  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.286009  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286152  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.286166  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.286407  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.286555  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286685  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.362923  548894 ssh_runner.go:195] Run: systemctl --version
	I1008 17:57:44.382917  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:57:44.543848  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:57:44.549734  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:57:44.549799  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:57:44.566434  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:57:44.566456  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:57:44.566531  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:57:44.582382  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:57:44.595796  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:57:44.595845  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:57:44.608932  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:57:44.621723  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:57:44.737514  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:57:44.894846  548894 docker.go:233] disabling docker service ...
	I1008 17:57:44.894913  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:57:44.908802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:57:44.920944  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:57:45.040515  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:57:45.156709  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
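
Because this profile uses CRI-O, the steps above stop the cri-dockerd and Docker units, disable their sockets and mask the services so they stay out of CRI-O's way. A quick way to confirm the end state on the guest (unit names taken from the logged commands):

    # Expected: sockets report "disabled", services report "masked".
    systemctl is-enabled docker.socket cri-docker.socket
    systemctl is-enabled docker.service cri-docker.service
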
	I1008 17:57:45.170339  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:57:45.188088  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:57:45.188162  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.197887  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:57:45.197965  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.207765  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.217192  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.226820  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:57:45.236401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.246021  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.261908  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
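
Taken together, the sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon_cgroup to "pod" and open unprivileged ports via default_sysctls. A sketch of verifying the result on the guest (the commented lines are inferred from the commands, not a dump of the real file):

    # Expected keys in /etc/crio/crio.conf.d/02-crio.conf after the edits:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
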
	I1008 17:57:45.271409  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:57:45.280221  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:57:45.280279  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:57:45.293099  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
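
The status-255 failure above only means the br_netfilter module was not loaded yet, so the sysctl key did not exist; after the modprobe it does. A quick re-check on the guest:

    # Assumes root on the guest: loading the module exposes the bridge-nf sysctl keys.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # resolves now instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward           # 1, written by the echo above
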
	I1008 17:57:45.301781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:45.406440  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:57:45.492188  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:57:45.492292  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:57:45.496696  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:57:45.496749  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:57:45.500380  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:57:45.538828  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:57:45.538916  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.566412  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.594012  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:57:45.595183  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:45.597820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598135  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:45.598169  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598406  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:57:45.602368  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:57:45.614968  548894 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 17:57:45.615076  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:45.615144  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:45.645417  548894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 17:57:45.645488  548894 ssh_runner.go:195] Run: which lz4
	I1008 17:57:45.649242  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1008 17:57:45.649331  548894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 17:57:45.653358  548894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 17:57:45.653398  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 17:57:46.900415  548894 crio.go:462] duration metric: took 1.251111162s to copy over tarball
	I1008 17:57:46.900502  548894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 17:57:48.824951  548894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.92441022s)
	I1008 17:57:48.824989  548894 crio.go:469] duration metric: took 1.924546326s to extract the tarball
	I1008 17:57:48.825000  548894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 17:57:48.862916  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:48.914586  548894 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 17:57:48.914611  548894 cache_images.go:84] Images are preloaded, skipping loading
	I1008 17:57:48.914620  548894 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 17:57:48.914713  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:57:48.914782  548894 ssh_runner.go:195] Run: crio config
	I1008 17:57:48.965231  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:48.965254  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:57:48.965272  548894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 17:57:48.965293  548894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 17:57:48.965430  548894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 17:57:48.965457  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:57:48.965957  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:57:48.984862  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
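
The modprobe above is what appears to drive the "auto-enabling control-plane load-balancing" decision: kube-vip's lb_enable path rides on IPVS, so the ip_vs family and nf_conntrack need to load first. Confirming the modules on the guest:

    # Sketch: the modules requested by the logged `modprobe --all` call.
    lsmod | grep -E '^ip_vs|^nf_conntrack'
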
	I1008 17:57:48.984960  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
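
Once that static pod is running, the APIServerHAVIP from the manifest (192.168.39.254 on eth0, port 8443) should surface on whichever control-plane node currently holds the plndr-cp-lock lease. A quick sanity check from the guest after the control plane is up:

    # Sketch: the VIP shows up as a secondary address on eth0 of the leader ...
    ip addr show dev eth0 | grep '192.168.39.254'
    # ... and the apiserver answers through it (self-signed chain, hence -k).
    curl -sk https://192.168.39.254:8443/healthz ; echo
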
	I1008 17:57:48.985020  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:57:48.994069  548894 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 17:57:48.994134  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 17:57:49.003013  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 17:57:49.018952  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:57:49.034270  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 17:57:49.049856  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1008 17:57:49.065212  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:57:49.068890  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
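
Together with the earlier host.minikube.internal edit, the guest's /etc/hosts now carries two minikube-managed entries, 192.168.39.1 for the host and 192.168.39.254 for the HA VIP. Listing them:

    # Expected: "192.168.39.1  host.minikube.internal" and "192.168.39.254  control-plane.minikube.internal".
    grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
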
	I1008 17:57:49.080238  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:49.207273  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:57:49.224685  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 17:57:49.224709  548894 certs.go:194] generating shared ca certs ...
	I1008 17:57:49.224731  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.224901  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:57:49.224958  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:57:49.224972  548894 certs.go:256] generating profile certs ...
	I1008 17:57:49.225044  548894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:57:49.225073  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt with IP's: []
	I1008 17:57:49.321305  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt ...
	I1008 17:57:49.321342  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt: {Name:mkc9007ec871f6b1b480e3b611a05707a64a5848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321530  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key ...
	I1008 17:57:49.321546  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key: {Name:mke9b241dc151acd2e67df3e03efa92395ed4dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321647  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc
	I1008 17:57:49.321666  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.254]
	I1008 17:57:49.615476  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc ...
	I1008 17:57:49.615508  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc: {Name:mk28ddc8f9cdc62c03babb0aa78423717078ec15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615696  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc ...
	I1008 17:57:49.615715  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc: {Name:mk7165300ee0dd42df7c546caae76a339625e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615817  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:57:49.615941  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:57:49.616029  548894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:57:49.616053  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt with IP's: []
	I1008 17:57:49.700382  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt ...
	I1008 17:57:49.700415  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt: {Name:mk23273db76b4a6b0f12257e27a1a06fa6830ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.700587  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key ...
	I1008 17:57:49.700602  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key: {Name:mk0eecaa249eaee41f1ee6112c7eb1585a4e7c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
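
The apiserver profile certificate generated above is the one that matters for an HA profile: per the log it is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.39.99 and the VIP 192.168.39.254. Inspecting the SANs on the build host:

    # Sketch: dump the SANs of the freshly written apiserver cert (path taken from the log).
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
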
	I1008 17:57:49.700724  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:57:49.700753  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:57:49.700768  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:57:49.700784  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:57:49.700811  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:57:49.700836  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:57:49.700855  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:57:49.700874  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:57:49.700934  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:57:49.700987  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:57:49.701002  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:57:49.701037  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:57:49.701072  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:57:49.701103  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:57:49.701155  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:49.701193  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:49.701232  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:57:49.701259  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:57:49.701875  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:57:49.727666  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:57:49.750886  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:57:49.773442  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:57:49.797562  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 17:57:49.820463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:57:49.843011  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:57:49.866615  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:57:49.889741  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:57:49.912810  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:57:49.936333  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:57:49.960454  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 17:57:49.979469  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:57:49.985669  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:57:49.997465  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003200  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003257  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.009543  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:57:50.024695  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:57:50.038764  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044608  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044730  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.050835  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:57:50.061168  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:57:50.071347  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075705  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075749  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.081172  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
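
The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject-hash names: each is the output of `openssl x509 -hash` for the corresponding PEM plus a ".0" suffix, which is how CApath lookups in /etc/ssl/certs locate a CA. Reproducing the minikubeCA link by hand (h is just a scratch variable here):

    # Sketch: recompute the hash link exactly as the logged steps do.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"    # expected: b5213941, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
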
	I1008 17:57:50.091550  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:57:50.095476  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:57:50.095534  548894 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:50.095625  548894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 17:57:50.095693  548894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 17:57:50.141057  548894 cri.go:89] found id: ""
	I1008 17:57:50.141128  548894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 17:57:50.155661  548894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 17:57:50.164965  548894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 17:57:50.174132  548894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 17:57:50.174150  548894 kubeadm.go:157] found existing configuration files:
	
	I1008 17:57:50.174193  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 17:57:50.182760  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 17:57:50.182801  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 17:57:50.191921  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 17:57:50.200321  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 17:57:50.200379  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 17:57:50.209419  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.217728  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 17:57:50.217774  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.226543  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 17:57:50.234817  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 17:57:50.234864  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 17:57:50.243553  548894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 17:57:50.351407  548894 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 17:57:50.351505  548894 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 17:57:50.448058  548894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 17:57:50.448219  548894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 17:57:50.448390  548894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 17:57:50.458228  548894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 17:57:50.561945  548894 out.go:235]   - Generating certificates and keys ...
	I1008 17:57:50.562071  548894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 17:57:50.562160  548894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 17:57:50.581396  548894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 17:57:50.643567  548894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 17:57:50.777590  548894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 17:57:50.908209  548894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 17:57:51.030015  548894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 17:57:51.030180  548894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.147196  548894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 17:57:51.147407  548894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.301954  548894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 17:57:51.401522  548894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 17:57:51.537212  548894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 17:57:51.537477  548894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 17:57:51.996984  548894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 17:57:52.232782  548894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 17:57:52.360403  548894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 17:57:52.550793  548894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 17:57:52.645896  548894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 17:57:52.646431  548894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 17:57:52.649705  548894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 17:57:52.693095  548894 out.go:235]   - Booting up control plane ...
	I1008 17:57:52.693231  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 17:57:52.693301  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 17:57:52.693399  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 17:57:52.693595  548894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 17:57:52.693726  548894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 17:57:52.693765  548894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 17:57:52.808206  548894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 17:57:52.808366  548894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 17:57:53.309429  548894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.545044ms
	I1008 17:57:53.309511  548894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 17:57:59.231916  548894 kubeadm.go:310] [api-check] The API server is healthy after 5.925563733s
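
When a start like this stalls, the two health gates reported above are worth poking by hand from the guest: the kubelet's local healthz on port 10248 (quoted in the kubelet-check line) and the apiserver on the advertise address.

    # Sketch: manual versions of the health checks referenced above.
    curl -sS http://127.0.0.1:10248/healthz ; echo      # kubelet, plain HTTP on localhost
    curl -sSk https://192.168.39.99:8443/healthz ; echo # apiserver on the node IP (self-signed, hence -k)
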
	I1008 17:57:59.243298  548894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 17:57:59.259662  548894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 17:57:59.788254  548894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 17:57:59.788485  548894 kubeadm.go:310] [mark-control-plane] Marking the node ha-094095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 17:57:59.797286  548894 kubeadm.go:310] [bootstrap-token] Using token: 3mfy3k.85hms8dtl8svlvkm
	I1008 17:57:59.798387  548894 out.go:235]   - Configuring RBAC rules ...
	I1008 17:57:59.798518  548894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 17:57:59.805485  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 17:57:59.816460  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 17:57:59.820883  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 17:57:59.823643  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 17:57:59.826562  548894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 17:57:59.838159  548894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 17:58:00.096325  548894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 17:58:00.637130  548894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 17:58:00.638100  548894 kubeadm.go:310] 
	I1008 17:58:00.638187  548894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 17:58:00.638198  548894 kubeadm.go:310] 
	I1008 17:58:00.638289  548894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 17:58:00.638337  548894 kubeadm.go:310] 
	I1008 17:58:00.638388  548894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 17:58:00.638476  548894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 17:58:00.638558  548894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 17:58:00.638573  548894 kubeadm.go:310] 
	I1008 17:58:00.638644  548894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 17:58:00.638654  548894 kubeadm.go:310] 
	I1008 17:58:00.638715  548894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 17:58:00.638725  548894 kubeadm.go:310] 
	I1008 17:58:00.638784  548894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 17:58:00.638864  548894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 17:58:00.638920  548894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 17:58:00.638927  548894 kubeadm.go:310] 
	I1008 17:58:00.638996  548894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 17:58:00.639061  548894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 17:58:00.639067  548894 kubeadm.go:310] 
	I1008 17:58:00.639138  548894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639257  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 17:58:00.639298  548894 kubeadm.go:310] 	--control-plane 
	I1008 17:58:00.639308  548894 kubeadm.go:310] 
	I1008 17:58:00.639444  548894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 17:58:00.639453  548894 kubeadm.go:310] 
	I1008 17:58:00.639547  548894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639692  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 17:58:00.640765  548894 kubeadm.go:310] W1008 17:57:50.322627     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.640999  548894 kubeadm.go:310] W1008 17:57:50.323512     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.641121  548894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
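
The first two warnings are only about the API version of the generated config: minikube still emits kubeadm.k8s.io/v1beta3, which kubeadm 1.31 accepts but flags as deprecated. The third matches the plain `systemctl start kubelet` (without enable) seen earlier in this log. The suggested migration can be run against the file minikube drops on the guest (the output path here is just an example name):

    # Sketch: the migration kubeadm suggests, using the binary and config path from the log.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /tmp/kubeadm-migrated.yaml
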
	I1008 17:58:00.641159  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:58:00.641169  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:58:00.643434  548894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1008 17:58:00.644444  548894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 17:58:00.650209  548894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1008 17:58:00.650224  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 17:58:00.677687  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
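
With the kindnet manifest applied (the 2601-byte cni.yaml scp'd above), the pod network runs as a DaemonSet in kube-system. Checking it with the same pinned kubectl the log uses:

    # Sketch: list the DaemonSets created for the CNI in kube-system.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets -o wide
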
	I1008 17:58:01.011782  548894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 17:58:01.011872  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.011918  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095 minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=true
	I1008 17:58:01.050127  548894 ops.go:34] apiserver oom_adj: -16
	I1008 17:58:01.121355  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.622435  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.121789  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.621637  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.121512  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.621993  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.121641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.621728  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.753917  548894 kubeadm.go:1113] duration metric: took 3.742110374s to wait for elevateKubeSystemPrivileges
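
The block of repeated `kubectl get sa default` calls above is simply a poll loop: minikube keeps asking until the default ServiceAccount exists in the new cluster, and the 3.74s figure is how long that took here. The same wait, written out as a shell loop with the pinned kubectl from the log:

    # Sketch: poll until the default ServiceAccount is created by kube-controller-manager.
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
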
	I1008 17:58:04.753962  548894 kubeadm.go:394] duration metric: took 14.658436547s to StartCluster
	I1008 17:58:04.753985  548894 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.754071  548894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.755006  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.755245  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 17:58:04.755258  548894 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:04.755285  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:58:04.755305  548894 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 17:58:04.755395  548894 addons.go:69] Setting storage-provisioner=true in profile "ha-094095"
	I1008 17:58:04.755421  548894 addons.go:234] Setting addon storage-provisioner=true in "ha-094095"
	I1008 17:58:04.755423  548894 addons.go:69] Setting default-storageclass=true in profile "ha-094095"
	I1008 17:58:04.755450  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.755463  548894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-094095"
	I1008 17:58:04.755954  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:04.756015  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756060  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.756153  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756178  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.771314  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I1008 17:58:04.771411  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1008 17:58:04.771715  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.771845  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.772259  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772280  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772399  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772421  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772677  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772761  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772921  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.773166  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.773207  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.775127  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.775464  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 17:58:04.776098  548894 cert_rotation.go:140] Starting client certificate rotation controller
	I1008 17:58:04.776464  548894 addons.go:234] Setting addon default-storageclass=true in "ha-094095"
	I1008 17:58:04.776513  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.776901  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.776950  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.788872  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I1008 17:58:04.789408  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.789954  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.789982  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.790391  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.790585  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.791166  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1008 17:58:04.791602  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.792075  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.792102  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.792300  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.792437  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.792883  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.792921  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.794070  548894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 17:58:04.795292  548894 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:04.795314  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 17:58:04.795333  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.798275  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798778  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.798817  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798959  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.799152  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.799319  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.799447  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.807217  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1008 17:58:04.807681  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.808084  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.808108  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.808466  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.808664  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.810084  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.810282  548894 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:04.810305  548894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 17:58:04.810351  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.813002  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813401  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.813426  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813628  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.813798  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.813951  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.814091  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.894935  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 17:58:04.989822  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:05.005242  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:05.480020  548894 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
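The host-record injection logged above is done on the node by piping the coredns ConfigMap through sed and `kubectl replace`. As a rough illustration of the same idea done through the API, a minimal client-go sketch could look like the following; the kubeconfig path, the host IP, and the string-matching on the Corefile are assumptions for illustration, not minikube's actual implementation.

```go
// Sketch: ensure a hosts{} record for host.minikube.internal in CoreDNS's
// Corefile, mirroring the sed-based edit in the log above. Paths and the
// host IP are placeholders; this is not minikube's own code.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts{} block just before the forward directive, roughly
	// what the sed expression in the log does.
	hostsBlock := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		corefile = strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			hostsBlock+"        forward . /etc/resolv.conf", 1)
		cm.Data["Corefile"] = corefile
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("host record ensured in CoreDNS Corefile")
}
```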
	I1008 17:58:05.749086  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749116  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749148  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749170  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749410  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749425  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749434  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749440  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749521  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749536  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749550  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749557  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749608  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749908  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749943  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750036  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749970  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.750103  548894 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 17:58:05.749988  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750114  548894 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 17:58:05.750160  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.750219  548894 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1008 17:58:05.750231  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.750241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.750250  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.762332  548894 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1008 17:58:05.763152  548894 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1008 17:58:05.763172  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.763185  548894 round_trippers.go:473]     Content-Type: application/json
	I1008 17:58:05.763193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.763197  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.765314  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
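The GET on the storageclasses collection followed by the PUT to /storageclasses/standard is the default-storageclass addon marking the "standard" class as the cluster default. A hedged client-go sketch of the same round trip follows; the kubeconfig path is a placeholder, and the annotation key is the standard Kubernetes one rather than anything minikube-specific.

```go
// Sketch: mark the "standard" StorageClass as the cluster default, the
// operation behind the GET/PUT pair in the log. Not minikube's code; the
// kubeconfig path is a placeholder.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Fetch the existing StorageClass (GET) ...
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// ... flag it as the default class and write it back (PUT).
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```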
	I1008 17:58:05.765554  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.765571  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.765856  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.765872  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.765886  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.768201  548894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1008 17:58:05.769166  548894 addons.go:510] duration metric: took 1.013864152s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 17:58:05.769206  548894 start.go:246] waiting for cluster config update ...
	I1008 17:58:05.769221  548894 start.go:255] writing updated cluster config ...
	I1008 17:58:05.770624  548894 out.go:201] 
	I1008 17:58:05.771889  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:05.771979  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.773435  548894 out.go:177] * Starting "ha-094095-m02" control-plane node in "ha-094095" cluster
	I1008 17:58:05.774389  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:58:05.774416  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:58:05.774517  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:58:05.774543  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:58:05.774635  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.774827  548894 start.go:360] acquireMachinesLock for ha-094095-m02: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:58:05.774885  548894 start.go:364] duration metric: took 34.657µs to acquireMachinesLock for "ha-094095-m02"
	I1008 17:58:05.774908  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:05.775005  548894 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1008 17:58:05.776351  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:58:05.776440  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:05.776482  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:05.791492  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I1008 17:58:05.791992  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:05.792464  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:05.792487  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:05.792786  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:05.792949  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:05.793054  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:05.793160  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:58:05.793192  548894 client.go:168] LocalClient.Create starting
	I1008 17:58:05.793230  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:58:05.793268  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793289  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793356  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:58:05.793382  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793399  548894 main.go:141] libmachine: Parsing certificate...
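The "Reading certificate data / Decoding PEM data / Parsing certificate" lines correspond to loading ca.pem and cert.pem from the certs directory. In Go that is a pem.Decode followed by x509.ParseCertificate, roughly as sketched below; the file path is a placeholder and the snippet is illustrative rather than the libmachine code.

```go
// Sketch: decode a PEM-encoded certificate file and parse it, the step
// behind the "Decoding PEM data... / Parsing certificate..." log lines.
// The path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/.minikube/certs/ca.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // first PEM block; any trailing data is ignored here
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}
```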
	I1008 17:58:05.793425  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:58:05.793436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .PreCreateCheck
	I1008 17:58:05.793636  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:05.793961  548894 main.go:141] libmachine: Creating machine...
	I1008 17:58:05.793974  548894 main.go:141] libmachine: (ha-094095-m02) Calling .Create
	I1008 17:58:05.794087  548894 main.go:141] libmachine: (ha-094095-m02) Creating KVM machine...
	I1008 17:58:05.795174  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing default KVM network
	I1008 17:58:05.795373  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing private KVM network mk-ha-094095
	I1008 17:58:05.795488  548894 main.go:141] libmachine: (ha-094095-m02) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:05.795518  548894 main.go:141] libmachine: (ha-094095-m02) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:58:05.795590  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:05.795498  549282 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:05.795693  548894 main.go:141] libmachine: (ha-094095-m02) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:58:06.080254  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.080126  549282 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa...
	I1008 17:58:06.408665  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408546  549282 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk...
	I1008 17:58:06.408701  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing magic tar header
	I1008 17:58:06.408716  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing SSH key tar header
	I1008 17:58:06.408729  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408669  549282 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:06.408798  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02
	I1008 17:58:06.408863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:58:06.408916  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 (perms=drwx------)
	I1008 17:58:06.408935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:06.408946  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:58:06.408954  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:58:06.408966  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:58:06.408972  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home
	I1008 17:58:06.408988  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Skipping /home - not owner
	I1008 17:58:06.409003  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:58:06.409013  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:58:06.409022  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:58:06.409038  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:58:06.409050  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:58:06.409060  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:06.410262  548894 main.go:141] libmachine: (ha-094095-m02) define libvirt domain using xml: 
	I1008 17:58:06.410280  548894 main.go:141] libmachine: (ha-094095-m02) <domain type='kvm'>
	I1008 17:58:06.410300  548894 main.go:141] libmachine: (ha-094095-m02)   <name>ha-094095-m02</name>
	I1008 17:58:06.410310  548894 main.go:141] libmachine: (ha-094095-m02)   <memory unit='MiB'>2200</memory>
	I1008 17:58:06.410330  548894 main.go:141] libmachine: (ha-094095-m02)   <vcpu>2</vcpu>
	I1008 17:58:06.410344  548894 main.go:141] libmachine: (ha-094095-m02)   <features>
	I1008 17:58:06.410353  548894 main.go:141] libmachine: (ha-094095-m02)     <acpi/>
	I1008 17:58:06.410361  548894 main.go:141] libmachine: (ha-094095-m02)     <apic/>
	I1008 17:58:06.410367  548894 main.go:141] libmachine: (ha-094095-m02)     <pae/>
	I1008 17:58:06.410371  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410376  548894 main.go:141] libmachine: (ha-094095-m02)   </features>
	I1008 17:58:06.410383  548894 main.go:141] libmachine: (ha-094095-m02)   <cpu mode='host-passthrough'>
	I1008 17:58:06.410388  548894 main.go:141] libmachine: (ha-094095-m02)   
	I1008 17:58:06.410392  548894 main.go:141] libmachine: (ha-094095-m02)   </cpu>
	I1008 17:58:06.410397  548894 main.go:141] libmachine: (ha-094095-m02)   <os>
	I1008 17:58:06.410403  548894 main.go:141] libmachine: (ha-094095-m02)     <type>hvm</type>
	I1008 17:58:06.410408  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='cdrom'/>
	I1008 17:58:06.410418  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='hd'/>
	I1008 17:58:06.410430  548894 main.go:141] libmachine: (ha-094095-m02)     <bootmenu enable='no'/>
	I1008 17:58:06.410440  548894 main.go:141] libmachine: (ha-094095-m02)   </os>
	I1008 17:58:06.410448  548894 main.go:141] libmachine: (ha-094095-m02)   <devices>
	I1008 17:58:06.410456  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='cdrom'>
	I1008 17:58:06.410468  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/boot2docker.iso'/>
	I1008 17:58:06.410474  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hdc' bus='scsi'/>
	I1008 17:58:06.410479  548894 main.go:141] libmachine: (ha-094095-m02)       <readonly/>
	I1008 17:58:06.410485  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410515  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='disk'>
	I1008 17:58:06.410542  548894 main.go:141] libmachine: (ha-094095-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:58:06.410557  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk'/>
	I1008 17:58:06.410568  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hda' bus='virtio'/>
	I1008 17:58:06.410582  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410592  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410604  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='mk-ha-094095'/>
	I1008 17:58:06.410613  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410622  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410630  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410642  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='default'/>
	I1008 17:58:06.410661  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410673  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410683  548894 main.go:141] libmachine: (ha-094095-m02)     <serial type='pty'>
	I1008 17:58:06.410692  548894 main.go:141] libmachine: (ha-094095-m02)       <target port='0'/>
	I1008 17:58:06.410700  548894 main.go:141] libmachine: (ha-094095-m02)     </serial>
	I1008 17:58:06.410712  548894 main.go:141] libmachine: (ha-094095-m02)     <console type='pty'>
	I1008 17:58:06.410727  548894 main.go:141] libmachine: (ha-094095-m02)       <target type='serial' port='0'/>
	I1008 17:58:06.410741  548894 main.go:141] libmachine: (ha-094095-m02)     </console>
	I1008 17:58:06.410750  548894 main.go:141] libmachine: (ha-094095-m02)     <rng model='virtio'>
	I1008 17:58:06.410761  548894 main.go:141] libmachine: (ha-094095-m02)       <backend model='random'>/dev/random</backend>
	I1008 17:58:06.410771  548894 main.go:141] libmachine: (ha-094095-m02)     </rng>
	I1008 17:58:06.410780  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410787  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410796  548894 main.go:141] libmachine: (ha-094095-m02)   </devices>
	I1008 17:58:06.410804  548894 main.go:141] libmachine: (ha-094095-m02) </domain>
	I1008 17:58:06.410828  548894 main.go:141] libmachine: (ha-094095-m02) 
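The XML printed above is handed to libvirt to define and then boot the m02 domain. Assuming the libvirt.org/go/libvirt bindings are used the way most callers use them, the define-and-create step looks roughly like this sketch; the XML file path in main is a placeholder standing in for the document above.

```go
// Sketch: define and start a KVM domain from an XML description, roughly
// what follows the "define libvirt domain using xml" log lines. Assumes
// the libvirt.org/go/libvirt bindings; the XML path is a placeholder.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func createDomain(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	// Define the persistent domain from XML...
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	// ...then boot it ("Creating domain..." in the log).
	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	xmlBytes, err := os.ReadFile(os.Args[1]) // path to a domain XML file
	if err != nil {
		panic(err)
	}
	if err := createDomain(string(xmlBytes)); err != nil {
		panic(err)
	}
}
```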
	I1008 17:58:06.418030  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:0f:fc:b1 in network default
	I1008 17:58:06.418595  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring networks are active...
	I1008 17:58:06.418616  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:06.419273  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network default is active
	I1008 17:58:06.419679  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network mk-ha-094095 is active
	I1008 17:58:06.420099  548894 main.go:141] libmachine: (ha-094095-m02) Getting domain xml...
	I1008 17:58:06.420774  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:07.625613  548894 main.go:141] libmachine: (ha-094095-m02) Waiting to get IP...
	I1008 17:58:07.626394  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.626834  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.626863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.626812  549282 retry.go:31] will retry after 298.191028ms: waiting for machine to come up
	I1008 17:58:07.926517  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.926935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.926967  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.926892  549282 retry.go:31] will retry after 251.007436ms: waiting for machine to come up
	I1008 17:58:08.179311  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.179723  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.179753  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.179684  549282 retry.go:31] will retry after 369.990509ms: waiting for machine to come up
	I1008 17:58:08.551209  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.551664  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.551688  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.551618  549282 retry.go:31] will retry after 529.446819ms: waiting for machine to come up
	I1008 17:58:09.082289  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.082764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.082787  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.082730  549282 retry.go:31] will retry after 698.772609ms: waiting for machine to come up
	I1008 17:58:09.782428  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.783035  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.783077  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.782975  549282 retry.go:31] will retry after 749.123701ms: waiting for machine to come up
	I1008 17:58:10.533886  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:10.534374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:10.534406  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:10.534314  549282 retry.go:31] will retry after 748.167347ms: waiting for machine to come up
	I1008 17:58:11.284374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:11.284764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:11.284793  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:11.284726  549282 retry.go:31] will retry after 1.314312212s: waiting for machine to come up
	I1008 17:58:12.600256  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:12.600675  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:12.600706  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:12.600619  549282 retry.go:31] will retry after 1.264771643s: waiting for machine to come up
	I1008 17:58:13.867255  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:13.867784  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:13.867816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:13.867728  549282 retry.go:31] will retry after 2.081210662s: waiting for machine to come up
	I1008 17:58:15.950893  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:15.951309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:15.951341  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:15.951258  549282 retry.go:31] will retry after 2.823132453s: waiting for machine to come up
	I1008 17:58:18.778198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:18.778573  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:18.778605  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:18.778535  549282 retry.go:31] will retry after 2.715237967s: waiting for machine to come up
	I1008 17:58:21.495309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:21.495754  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:21.495780  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:21.495712  549282 retry.go:31] will retry after 2.962404474s: waiting for machine to come up
	I1008 17:58:24.461815  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:24.462170  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:24.462198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:24.462131  549282 retry.go:31] will retry after 4.711440731s: waiting for machine to come up
	I1008 17:58:29.176935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177439  548894 main.go:141] libmachine: (ha-094095-m02) Found IP for machine: 192.168.39.65
	I1008 17:58:29.177459  548894 main.go:141] libmachine: (ha-094095-m02) Reserving static IP address...
	I1008 17:58:29.177467  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177881  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find host DHCP lease matching {name: "ha-094095-m02", mac: "52:54:00:28:c9:b2", ip: "192.168.39.65"} in network mk-ha-094095
	I1008 17:58:29.250979  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Getting to WaitForSSH function...
	I1008 17:58:29.251007  548894 main.go:141] libmachine: (ha-094095-m02) Reserved static IP address: 192.168.39.65
	I1008 17:58:29.251020  548894 main.go:141] libmachine: (ha-094095-m02) Waiting for SSH to be available...
	I1008 17:58:29.253304  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253715  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.253745  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253826  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH client type: external
	I1008 17:58:29.253858  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa (-rw-------)
	I1008 17:58:29.253895  548894 main.go:141] libmachine: (ha-094095-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:58:29.253928  548894 main.go:141] libmachine: (ha-094095-m02) DBG | About to run SSH command:
	I1008 17:58:29.253953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | exit 0
	I1008 17:58:29.377997  548894 main.go:141] libmachine: (ha-094095-m02) DBG | SSH cmd err, output: <nil>: 
	I1008 17:58:29.378287  548894 main.go:141] libmachine: (ha-094095-m02) KVM machine creation complete!
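The long run of "will retry after ..." lines while the VM waited for an address is a plain wait loop that polls the DHCP leases with a growing delay until the probe succeeds. A generic sketch of that pattern is below; the durations, jitter, and the probe function are illustrative assumptions, not the retry.go implementation.

```go
// Sketch: poll with a growing, jittered backoff until a probe succeeds,
// the pattern behind the "will retry after ..." lines while the new VM
// waits for a DHCP lease. Probe and limits are illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries probe() with an increasing delay until it succeeds or
// the overall deadline is exceeded.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Jitter keeps repeated runs from probing in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Placeholder probe: pretend the IP shows up after ~3 seconds.
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("unable to find current IP address")
	}, 30*time.Second)
	fmt.Println("result:", err)
}
```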
	I1008 17:58:29.378621  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:29.379167  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379376  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379500  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:58:29.379514  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 17:58:29.380658  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:58:29.380670  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:58:29.380676  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:58:29.380683  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.382734  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383074  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.383097  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383251  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.383416  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383613  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383753  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.383914  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.384122  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.384133  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:58:29.485427  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
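Both the external and the native SSH probes above simply run `exit 0` to confirm the guest's sshd accepts the generated key. A hedged sketch of that liveness check using golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders, and skipping host-key verification mirrors the StrictHostKeyChecking=no seen in the external-client flags.

```go
// Sketch: run "exit 0" over SSH with a private key, as the probes above
// do to confirm the guest is reachable. Address, user and key path are
// placeholders.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshExitZero(addr, user, keyPath string) error {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // like StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // nil error means sshd is up and the key works
}

func main() {
	if err := sshExitZero("192.168.39.65:22", "docker", "/path/to/id_rsa"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}
```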
	I1008 17:58:29.485449  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:58:29.485460  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.488012  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488364  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.488395  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488586  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.488786  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.488953  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.489087  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.489247  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.489514  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.489530  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:58:29.590445  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:58:29.590532  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:58:29.590542  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:58:29.590551  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.590782  548894 buildroot.go:166] provisioning hostname "ha-094095-m02"
	I1008 17:58:29.590806  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.591021  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.593666  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594067  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.594096  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594246  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.594404  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594554  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594724  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.594891  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.595109  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.595125  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m02 && echo "ha-094095-m02" | sudo tee /etc/hostname
	I1008 17:58:29.714147  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m02
	
	I1008 17:58:29.714180  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.716973  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717353  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.717384  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717565  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.717752  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.717913  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.718050  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.718222  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.718416  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.718433  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:58:29.831586  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:58:29.831619  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:58:29.831636  548894 buildroot.go:174] setting up certificates
	I1008 17:58:29.831645  548894 provision.go:84] configureAuth start
	I1008 17:58:29.831659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.831944  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:29.834827  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835217  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.835237  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.837816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.838223  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838374  548894 provision.go:143] copyHostCerts
	I1008 17:58:29.838406  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838440  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:58:29.838448  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838513  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:58:29.838598  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838615  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:58:29.838620  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838643  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:58:29.838682  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838698  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:58:29.838704  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838730  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:58:29.838774  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m02 san=[127.0.0.1 192.168.39.65 ha-094095-m02 localhost minikube]
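The server certificate generated here carries SANs for the loopback address, the VM's IP, its hostnames, and "minikube". A simplified crypto/x509 sketch of issuing such a certificate is shown below; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair, and the key size and validity period are assumptions.

```go
// Sketch: issue a server certificate whose SANs match the list in the
// log (127.0.0.1, the VM IP, the hostnames). Self-signed for brevity;
// key size and validity are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-094095-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-094095-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
	}
	// Self-signed: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```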
	I1008 17:58:29.938554  548894 provision.go:177] copyRemoteCerts
	I1008 17:58:29.938614  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:58:29.938646  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.941344  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941644  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.941673  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941805  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.942003  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.942163  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.942301  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.024548  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:58:30.024622  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:58:30.049270  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:58:30.049353  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:58:30.073294  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:58:30.073363  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:58:30.097034  548894 provision.go:87] duration metric: took 265.374667ms to configureAuth
	I1008 17:58:30.097066  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:58:30.097258  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:30.097336  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.100086  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100367  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.100397  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100547  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.100709  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.100901  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.101076  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.101293  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.101528  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.101554  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:58:30.316444  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:58:30.316471  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:58:30.316479  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetURL
	I1008 17:58:30.317802  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using libvirt version 6000000
	I1008 17:58:30.320137  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320544  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.320587  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320709  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:58:30.320718  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:58:30.320726  548894 client.go:171] duration metric: took 24.527519698s to LocalClient.Create
	I1008 17:58:30.320756  548894 start.go:167] duration metric: took 24.527598536s to libmachine.API.Create "ha-094095"
	I1008 17:58:30.320770  548894 start.go:293] postStartSetup for "ha-094095-m02" (driver="kvm2")
	I1008 17:58:30.320783  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:58:30.320822  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.321070  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:58:30.321097  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.323268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323601  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.323630  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323770  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.323934  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.324073  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.324173  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.408962  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:58:30.413084  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:58:30.413110  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:58:30.413178  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:58:30.413266  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:58:30.413279  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:58:30.413385  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:58:30.423213  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:30.446502  548894 start.go:296] duration metric: took 125.715217ms for postStartSetup
	I1008 17:58:30.446572  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:30.447199  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.449851  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450235  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.450268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450469  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:30.450701  548894 start.go:128] duration metric: took 24.675682473s to createHost
	I1008 17:58:30.450743  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.453038  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453348  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.453375  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.453697  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.453857  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.454010  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.454159  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.454400  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.454410  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:58:30.559077  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410310.517666608
	
	I1008 17:58:30.559107  548894 fix.go:216] guest clock: 1728410310.517666608
	I1008 17:58:30.559114  548894 fix.go:229] Guest: 2024-10-08 17:58:30.517666608 +0000 UTC Remote: 2024-10-08 17:58:30.45071757 +0000 UTC m=+71.541677784 (delta=66.949038ms)
	I1008 17:58:30.559131  548894 fix.go:200] guest clock delta is within tolerance: 66.949038ms
	I1008 17:58:30.559136  548894 start.go:83] releasing machines lock for "ha-094095-m02", held for 24.78424013s
	I1008 17:58:30.559157  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.559409  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.562379  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.562717  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.562741  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.564989  548894 out.go:177] * Found network options:
	I1008 17:58:30.566270  548894 out.go:177]   - NO_PROXY=192.168.39.99
	W1008 17:58:30.567463  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.567496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568070  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568303  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568423  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:58:30.568473  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	W1008 17:58:30.568503  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.568602  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:58:30.568624  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.570953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571141  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571291  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571315  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571468  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571489  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571498  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571671  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572011  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572054  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.572151  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.807329  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:58:30.813213  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:58:30.813287  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:58:30.829683  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:58:30.829708  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:58:30.829790  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:58:30.845021  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:58:30.858172  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:58:30.858226  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:58:30.871442  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:58:30.884200  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:58:31.001594  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:58:31.145565  548894 docker.go:233] disabling docker service ...
	I1008 17:58:31.145647  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:58:31.159802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:58:31.172545  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:58:31.317614  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:58:31.428085  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:58:31.441474  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:58:31.458921  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:58:31.458992  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.469332  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:58:31.469401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.479553  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.489606  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.499476  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:58:31.509618  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.519561  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.536177  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.546145  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:58:31.555445  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:58:31.555504  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:58:31.568401  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:58:31.577660  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:31.690206  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
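Taken together, the commands above point crictl at the CRI-O socket, rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs as cgroup manager, conmon_cgroup, the unprivileged-port sysctl), load br_netfilter, enable IPv4 forwarding and restart the runtime. A short sketch of verifying the outcome on the node; every path and value below is taken from those commands:

cat /etc/crictl.yaml                     # runtime-endpoint: unix:///var/run/crio/crio.sock
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
lsmod | grep br_netfilter                # module loaded by the modprobe above
cat /proc/sys/net/ipv4/ip_forward        # expected: 1
sudo systemctl is-active crio            # expected: active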
	I1008 17:58:31.785577  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:58:31.785668  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:58:31.790440  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:58:31.790488  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:58:31.794008  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:58:31.830698  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:58:31.830779  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.860448  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.888491  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:58:31.889686  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:58:31.890999  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:31.893749  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894085  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:31.894111  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894298  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:58:31.898872  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:31.911229  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:58:31.911431  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:31.911784  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.911827  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.926475  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I1008 17:58:31.926940  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.927427  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.927446  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.927739  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.927928  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:31.929331  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:31.929604  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.929636  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.944569  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1008 17:58:31.945071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.945554  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.945577  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.945884  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.946077  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:31.946243  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.65
	I1008 17:58:31.946257  548894 certs.go:194] generating shared ca certs ...
	I1008 17:58:31.946274  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:31.946447  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:58:31.946488  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:58:31.946503  548894 certs.go:256] generating profile certs ...
	I1008 17:58:31.946591  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:58:31.946614  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9
	I1008 17:58:31.946631  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.254]
	I1008 17:58:32.004758  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 ...
	I1008 17:58:32.004782  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9: {Name:mk5f5c650d9dd5d2249fb843b585c028b52aecec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.004936  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 ...
	I1008 17:58:32.004948  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9: {Name:mk72de6dbb470530f019dc623057311deeb636c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.005014  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:58:32.005145  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:58:32.005267  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:58:32.005283  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:58:32.005296  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:58:32.005308  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:58:32.005321  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:58:32.005335  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:58:32.005348  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:58:32.005359  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:58:32.005370  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:58:32.005421  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:58:32.005451  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:58:32.005460  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:58:32.005496  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:58:32.005520  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:58:32.005541  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:58:32.005579  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:32.005605  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.005619  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.005631  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.005665  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:32.008694  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009085  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:32.009115  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009227  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:32.009422  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:32.009576  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:32.009716  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:32.082578  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:58:32.087536  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:58:32.098777  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:58:32.102888  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:58:32.112522  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:58:32.116400  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:58:32.126625  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:58:32.130706  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:58:32.141238  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:58:32.145206  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:58:32.154909  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:58:32.159011  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:58:32.169341  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:58:32.193388  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:58:32.215733  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:58:32.237995  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:58:32.260545  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 17:58:32.283295  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 17:58:32.305577  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:58:32.327963  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:58:32.350081  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:58:32.372344  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:58:32.394280  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:58:32.416064  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:58:32.431348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:58:32.446729  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:58:32.462348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:58:32.479908  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:58:32.495280  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:58:32.510638  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
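At this point the shared CAs, the profile certificates and the control-plane key material (sa.*, front-proxy CA, etcd CA, kubeconfig) have all been copied into /var/lib/minikube/certs on the new node. Earlier in this log the apiserver certificate was generated with the service IP, loopback, both node IPs and the VIP as SANs, so a quick sanity check could look like the following sketch (assumes openssl is available on the guest, which the very next log line exercises anyway):

sudo ls -l /var/lib/minikube/certs
# SANs should include 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.99,
# 192.168.39.65 and the VIP 192.168.39.254
sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
    | grep -A1 'Subject Alternative Name'
# and the copied cert should chain to the copied cluster CA
sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
    /var/lib/minikube/certs/apiserver.crt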
	I1008 17:58:32.526014  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:58:32.531514  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:58:32.541262  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545663  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545708  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.551139  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:58:32.561010  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:58:32.570960  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575030  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575086  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.580417  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:58:32.590088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:58:32.600566  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604834  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604876  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.610374  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
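The pattern above is how the host-side certificates become trusted system-wide on the guest: openssl x509 -hash -noout prints the subject-name hash, and the certificate is symlinked as /etc/ssl/certs/<hash>.0, the layout OpenSSL's CApath lookup expects (hence 51391683.0, 3ec20f2e.0 and b5213941.0 in this run). The same idea for an arbitrary certificate, as a sketch; cert.pem is a placeholder name:

hash=$(openssl x509 -hash -noout -in cert.pem)        # e.g. b5213941 for minikubeCA.pem
sudo cp cert.pem /usr/share/ca-certificates/cert.pem
sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${hash}.0"
openssl verify -CApath /etc/ssl/certs cert.pem        # a self-signed CA now verifies OK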
	I1008 17:58:32.620430  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:58:32.624404  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:58:32.624460  548894 kubeadm.go:934] updating node {m02 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1008 17:58:32.624566  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:58:32.624597  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:58:32.624632  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:58:32.640207  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:58:32.640276  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
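The manifest above runs kube-vip (ghcr.io/kube-vip/kube-vip:v0.8.3) as a host-network static pod with leader election and control-plane load-balancing on port 8443, holding the VIP 192.168.39.254 on eth0. A hedged sketch of checking it on a control-plane node once the manifest has been written (the scp a few lines below places it at /etc/kubernetes/manifests/kube-vip.yaml); only the current lease holder actually carries the address:

sudo head /etc/kubernetes/manifests/kube-vip.yaml
ip addr show eth0 | grep 192.168.39.254 \
  || echo "VIP held by another control-plane node"
curl -k https://192.168.39.254:8443/healthz     # "ok" once an apiserver answers behind the VIP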
	I1008 17:58:32.640318  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.651418  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:58:32.651482  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.660840  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:58:32.660867  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660925  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660955  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1008 17:58:32.660974  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1008 17:58:32.665332  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:58:32.665355  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:58:33.330557  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.330641  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.335582  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:58:33.335623  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:58:33.372522  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:58:33.392996  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.393114  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.400473  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:58:33.400509  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
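kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum= query, so minikube verifies the published .sha256 before caching each binary and scp-ing it into /var/lib/minikube/binaries/v1.31.1. A minimal sketch of the same download-and-verify step done by hand with the URLs from the log; this is an illustration, not the code path minikube itself uses:

V=v1.31.1; A=linux/amd64
for b in kubectl kubeadm kubelet; do
  curl -fsSLO "https://dl.k8s.io/release/${V}/bin/${A}/${b}"
  # the published .sha256 contains only the digest, so pair it with the filename
  curl -fsSL "https://dl.k8s.io/release/${V}/bin/${A}/${b}.sha256" \
    | awk -v f="$b" '{print $1"  "f}' | sha256sum -c -
done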
	I1008 17:58:33.862223  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:58:33.873974  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 17:58:33.890552  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:58:33.907049  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:58:33.923719  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:58:33.927643  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:33.940952  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:34.068619  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:34.085108  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:34.085464  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:34.085525  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:34.100590  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1008 17:58:34.101071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:34.101641  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:34.101663  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:34.101990  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:34.102197  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:34.102362  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:58:34.102466  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:58:34.102489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:34.105069  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105405  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:34.105432  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105659  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:34.105846  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:34.106036  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:34.106174  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:34.253303  548894 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:34.253365  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443"
	I1008 17:58:55.647352  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443": (21.393954296s)
	I1008 17:58:55.647399  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 17:58:56.179900  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m02 minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 17:58:56.351414  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 17:58:56.472891  548894 start.go:319] duration metric: took 22.370522266s to joinCluster
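With the join finished, kubelet is enabled and started on m02, the node receives the minikube.k8s.io labels shown above, and the control-plane NoSchedule taint is removed so the HA control-plane node can also schedule workloads. A sketch of confirming the same from a node with kubectl, using the kubeconfig path the commands above already rely on:

export KUBECONFIG=/var/lib/minikube/kubeconfig
kubectl get node ha-094095-m02 --show-labels | grep minikube.k8s.io/name=ha-094095
kubectl get node ha-094095-m02 -o jsonpath='{.spec.taints}{"\n"}'   # control-plane:NoSchedule should be gone
kubectl get nodes -o wide                                           # both control-plane nodes listed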
	I1008 17:58:56.472999  548894 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:56.473310  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:56.474358  548894 out.go:177] * Verifying Kubernetes components...
	I1008 17:58:56.475511  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:56.748460  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:56.780862  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:56.781184  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 17:58:56.781253  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 17:58:56.781476  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m02" to be "Ready" ...
	I1008 17:58:56.781593  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:56.781601  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:56.781608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:56.781612  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:56.791092  548894 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1008 17:58:57.281764  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.281787  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.281795  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.281800  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.293233  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:58:57.782526  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.782566  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.782571  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.786781  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.281871  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.281899  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.281911  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.281917  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.285022  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:58:58.781938  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.781972  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.781983  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.781989  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.786159  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.786795  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
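The loop here polls GET /api/v1/nodes/ha-094095-m02 roughly every 500ms and reads the Ready condition, which stays "False" until kubelet and the CNI finish coming up. Roughly the same check expressed with kubectl, as a sketch; the 6m timeout mirrors the wait budget stated above:

kubectl get node ha-094095-m02 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'   # True / False
kubectl wait --for=condition=Ready node/ha-094095-m02 --timeout=6m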
	I1008 17:58:59.282562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.282596  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.282609  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.282619  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.286768  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:59.781827  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.781856  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.781867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.781872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.785211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:00.282380  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.282406  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.282417  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.282424  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.285358  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:00.782500  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.782529  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.782538  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.782541  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.785321  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.281680  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.281702  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.281711  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.281717  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.284371  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.285041  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:01.782411  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.782443  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.782453  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.782458  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.785485  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.282181  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.282203  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.282212  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.282217  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.285355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.782528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.782565  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.782571  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.785688  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.282604  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.282627  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.282638  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.282646  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.286199  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.286918  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:03.782407  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.782431  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.782441  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.782447  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.785212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:04.282369  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.282392  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.282400  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.282404  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.285540  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:04.781799  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.781818  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.781831  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.781835  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.785050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.282133  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.282156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.282163  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.282166  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.285211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.782060  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.782079  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.782090  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.782097  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.784932  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:05.785622  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:06.282491  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.282513  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.282521  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.282524  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.285446  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:06.782400  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.782424  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.782433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.782439  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.787263  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:07.282189  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.282221  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.282227  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.282231  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.285027  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:07.781864  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.781885  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.781895  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.781901  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.784237  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:08.281994  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.282014  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.282022  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.282027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.285398  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:08.286042  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:08.782428  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.782454  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.782466  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.782472  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.785709  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.282163  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.282193  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.282204  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.282211  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.285429  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.782392  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.782415  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.782423  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.782427  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.785404  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.282376  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.282398  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.282407  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.282410  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.293860  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:59:10.295059  548894 node_ready.go:49] node "ha-094095-m02" has status "Ready":"True"
	I1008 17:59:10.295090  548894 node_ready.go:38] duration metric: took 13.513574743s for node "ha-094095-m02" to be "Ready" ...
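The node_ready wait above is a plain polling loop: the Node object is re-fetched roughly every 500ms until its Ready condition flips to True, which is what finally happens at 17:59:10. The following is a minimal client-go sketch of that pattern, for illustration only; the function name waitForNodeReady, the interval, and the timeout handling are assumptions and not minikube's actual helper.

    // Illustrative sketch: poll a Node until its Ready condition is True
    // or the timeout expires (the pattern behind node_ready.go above).
    package example

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // matches the `"Ready":"True"` log line
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("node %q not Ready within %s", name, timeout)
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between GETs
    	}
    }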
	I1008 17:59:10.295105  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:59:10.295211  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:10.295228  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.295239  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.295243  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.309090  548894 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1008 17:59:10.317441  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.317556  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 17:59:10.317568  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.317578  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.317586  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.321472  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.322135  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.322156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.322167  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.322174  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.328845  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.329380  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.329405  548894 pod_ready.go:82] duration metric: took 11.930599ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329419  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329498  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 17:59:10.329509  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.329520  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.329528  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.336402  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.337294  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.337313  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.337323  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.337328  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.340848  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.341320  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.341341  548894 pod_ready.go:82] duration metric: took 11.909652ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341354  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341421  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 17:59:10.341432  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.341442  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.341450  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.343586  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.344175  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.344191  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.344198  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.344202  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.346350  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.347112  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.347134  548894 pod_ready.go:82] duration metric: took 5.772495ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347147  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347220  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 17:59:10.347231  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.347241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.347249  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.349293  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.349880  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.349897  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.349916  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.349921  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.352009  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.352470  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.352496  548894 pod_ready.go:82] duration metric: took 5.340167ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.352518  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.482865  548894 request.go:632] Waited for 130.276413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482957  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482968  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.482977  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.482983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.486050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.683204  548894 request.go:632] Waited for 196.383245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683286  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683291  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.683299  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.683302  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.686545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.687112  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.687134  548894 pod_ready.go:82] duration metric: took 334.609013ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
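The "Waited for ... due to client-side throttling" messages above come from client-go's own rate limiter, not from the API server: the default rest.Config allows about 5 requests per second with a burst of 10, and the back-to-back pod and node GETs here exceed that. A short sketch of where those knobs live follows; the concrete values and the function name newFastClient are illustrative assumptions, not what minikube configures.

    // Illustrative sketch: raise client-go's client-side rate limits so that
    // bursts of GETs are not delayed (defaults are QPS=5, Burst=10).
    package example

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-side requests per second
    	cfg.Burst = 100 // burst allowed before throttling kicks in
    	return kubernetes.NewForConfig(cfg)
    }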
	I1008 17:59:10.687145  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.882406  548894 request.go:632] Waited for 195.187252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882484  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882489  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.882498  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.882503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.885610  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.082756  548894 request.go:632] Waited for 196.397183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082846  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082857  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.082869  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.082874  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.085950  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.086623  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.086650  548894 pod_ready.go:82] duration metric: took 399.497445ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.086663  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.282438  548894 request.go:632] Waited for 195.669677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282535  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282544  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.282552  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.282557  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.285746  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.482936  548894 request.go:632] Waited for 196.360528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483014  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483021  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.483030  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.483037  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.486267  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.486823  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.486845  548894 pod_ready.go:82] duration metric: took 400.172946ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.486856  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.683063  548894 request.go:632] Waited for 196.099154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683155  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683168  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.683181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.683192  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.686310  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.882490  548894 request.go:632] Waited for 195.281424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882569  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.882580  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.882587  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.885732  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.886206  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.886228  548894 pod_ready.go:82] duration metric: took 399.364956ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.886243  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.083083  548894 request.go:632] Waited for 196.741087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083174  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083181  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.083193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.083199  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.086438  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.282815  548894 request.go:632] Waited for 195.357265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282879  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282884  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.282892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.282897  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.286211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.286955  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.286978  548894 pod_ready.go:82] duration metric: took 400.728245ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.286989  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.483080  548894 request.go:632] Waited for 196.002385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483159  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483167  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.483181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.483193  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.486235  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.683233  548894 request.go:632] Waited for 196.354052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683315  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683322  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.683334  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.683341  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.686419  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.687164  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.687194  548894 pod_ready.go:82] duration metric: took 400.198282ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.687210  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.883073  548894 request.go:632] Waited for 195.753943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883139  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883145  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.883152  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.883156  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.886291  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.083210  548894 request.go:632] Waited for 196.369192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083288  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083296  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.083304  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.083308  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.086479  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.087168  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.087188  548894 pod_ready.go:82] duration metric: took 399.968628ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.087198  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.283359  548894 request.go:632] Waited for 196.068525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283420  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283425  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.283433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.283438  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.286484  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.482457  548894 request.go:632] Waited for 195.25665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482575  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482588  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.482599  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.482605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.485671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.486395  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.486417  548894 pod_ready.go:82] duration metric: took 399.212171ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.486429  548894 pod_ready.go:39] duration metric: took 3.191309926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:59:13.486448  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 17:59:13.486516  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 17:59:13.501134  548894 api_server.go:72] duration metric: took 17.028092431s to wait for apiserver process to appear ...
	I1008 17:59:13.501165  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 17:59:13.501208  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 17:59:13.505717  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 17:59:13.506345  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 17:59:13.506369  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.506381  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.506389  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.508475  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:13.508579  548894 api_server.go:141] control plane version: v1.31.1
	I1008 17:59:13.508596  548894 api_server.go:131] duration metric: took 7.424538ms to wait for apiserver health ...
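The healthz step above is an HTTPS GET against https://192.168.39.99:8443/healthz that expects a 200 response with the literal body "ok". A minimal sketch of that probe is below; the CA file path and the function name apiserverHealthy are assumptions for illustration.

    // Illustrative sketch: probe the apiserver's /healthz endpoint and require
    // an HTTP 200 with body "ok", as in the api_server.go lines above.
    package example

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func apiserverHealthy(endpoint, caFile string) error {
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }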
	I1008 17:59:13.508606  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 17:59:13.682454  548894 request.go:632] Waited for 173.762668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682527  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682532  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.682541  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.682546  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.687595  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 17:59:13.692646  548894 system_pods.go:59] 17 kube-system pods found
	I1008 17:59:13.692692  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:13.692702  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:13.692707  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:13.692713  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:13.692718  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:13.692723  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:13.692730  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:13.692735  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:13.692744  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:13.692750  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:13.692755  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:13.692760  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:13.692765  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:13.692774  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:13.692778  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:13.692783  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:13.692788  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:13.692796  548894 system_pods.go:74] duration metric: took 184.183414ms to wait for pod list to return data ...
	I1008 17:59:13.692811  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 17:59:13.883264  548894 request.go:632] Waited for 190.350103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883340  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883352  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.883364  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.883373  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.887200  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.887443  548894 default_sa.go:45] found service account: "default"
	I1008 17:59:13.887464  548894 default_sa.go:55] duration metric: took 194.642236ms for default service account to be created ...
	I1008 17:59:13.887473  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 17:59:14.083128  548894 request.go:632] Waited for 195.575348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083197  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083204  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.083215  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.083224  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.087502  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:14.091850  548894 system_pods.go:86] 17 kube-system pods found
	I1008 17:59:14.091874  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:14.091880  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:14.091884  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:14.091888  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:14.091895  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:14.091898  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:14.091903  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:14.091909  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:14.091915  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:14.091921  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:14.091929  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:14.091935  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:14.091943  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:14.091948  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:14.091954  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:14.091958  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:14.091961  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:14.091969  548894 system_pods.go:126] duration metric: took 204.490014ms to wait for k8s-apps to be running ...
	I1008 17:59:14.091978  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 17:59:14.092031  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:14.107751  548894 system_svc.go:56] duration metric: took 15.765669ms WaitForService to wait for kubelet
	I1008 17:59:14.107782  548894 kubeadm.go:582] duration metric: took 17.634744099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:59:14.107804  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 17:59:14.283342  548894 request.go:632] Waited for 175.43028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283397  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283402  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.283410  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.283415  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.286910  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:14.287827  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287854  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287877  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287883  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287892  548894 node_conditions.go:105] duration metric: took 180.082842ms to run NodePressure ...
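The NodePressure check above lists every node and reads its capacity; the "17734596Ki" ephemeral storage and "2" CPUs come straight from node.Status.Capacity. A small sketch of that read, assuming a client-go clientset is already available (the function name printNodeCapacity is illustrative):

    // Illustrative sketch: print each node's ephemeral-storage and CPU capacity,
    // the same fields the node_conditions.go lines above report.
    package example

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }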
	I1008 17:59:14.287908  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:59:14.287939  548894 start.go:255] writing updated cluster config ...
	I1008 17:59:14.289665  548894 out.go:201] 
	I1008 17:59:14.290934  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:14.291033  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.292598  548894 out.go:177] * Starting "ha-094095-m03" control-plane node in "ha-094095" cluster
	I1008 17:59:14.293602  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:59:14.293620  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:59:14.293722  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:59:14.293741  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:59:14.293865  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.294036  548894 start.go:360] acquireMachinesLock for ha-094095-m03: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:59:14.294084  548894 start.go:364] duration metric: took 28.442µs to acquireMachinesLock for "ha-094095-m03"
	I1008 17:59:14.294116  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:14.294207  548894 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1008 17:59:14.295495  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:59:14.295567  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:14.295608  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:14.310848  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I1008 17:59:14.311356  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:14.311872  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:14.311899  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:14.312212  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:14.312396  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:14.312674  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:14.312844  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:59:14.312876  548894 client.go:168] LocalClient.Create starting
	I1008 17:59:14.312902  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:59:14.312934  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.312948  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313000  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:59:14.313019  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.313027  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313042  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:59:14.313050  548894 main.go:141] libmachine: (ha-094095-m03) Calling .PreCreateCheck
	I1008 17:59:14.313206  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:14.313583  548894 main.go:141] libmachine: Creating machine...
	I1008 17:59:14.313600  548894 main.go:141] libmachine: (ha-094095-m03) Calling .Create
	I1008 17:59:14.313739  548894 main.go:141] libmachine: (ha-094095-m03) Creating KVM machine...
	I1008 17:59:14.314906  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing default KVM network
	I1008 17:59:14.315074  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing private KVM network mk-ha-094095
	I1008 17:59:14.315221  548894 main.go:141] libmachine: (ha-094095-m03) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.315247  548894 main.go:141] libmachine: (ha-094095-m03) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:59:14.315327  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.315217  549655 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.315388  548894 main.go:141] libmachine: (ha-094095-m03) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:59:14.593209  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.593087  549655 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa...
	I1008 17:59:14.821442  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821329  549655 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk...
	I1008 17:59:14.821476  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing magic tar header
	I1008 17:59:14.821491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing SSH key tar header
	I1008 17:59:14.821502  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821478  549655 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.821659  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03
	I1008 17:59:14.821694  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 (perms=drwx------)
	I1008 17:59:14.821705  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:59:14.821719  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.821729  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:59:14.821740  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:59:14.821750  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:59:14.821762  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:59:14.821772  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home
	I1008 17:59:14.821784  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:59:14.821794  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Skipping /home - not owner
	I1008 17:59:14.821808  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:59:14.821819  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:59:14.821836  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:59:14.821846  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:14.822739  548894 main.go:141] libmachine: (ha-094095-m03) define libvirt domain using xml: 
	I1008 17:59:14.822758  548894 main.go:141] libmachine: (ha-094095-m03) <domain type='kvm'>
	I1008 17:59:14.822767  548894 main.go:141] libmachine: (ha-094095-m03)   <name>ha-094095-m03</name>
	I1008 17:59:14.822774  548894 main.go:141] libmachine: (ha-094095-m03)   <memory unit='MiB'>2200</memory>
	I1008 17:59:14.822782  548894 main.go:141] libmachine: (ha-094095-m03)   <vcpu>2</vcpu>
	I1008 17:59:14.822792  548894 main.go:141] libmachine: (ha-094095-m03)   <features>
	I1008 17:59:14.822799  548894 main.go:141] libmachine: (ha-094095-m03)     <acpi/>
	I1008 17:59:14.822805  548894 main.go:141] libmachine: (ha-094095-m03)     <apic/>
	I1008 17:59:14.822815  548894 main.go:141] libmachine: (ha-094095-m03)     <pae/>
	I1008 17:59:14.822822  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.822827  548894 main.go:141] libmachine: (ha-094095-m03)   </features>
	I1008 17:59:14.822834  548894 main.go:141] libmachine: (ha-094095-m03)   <cpu mode='host-passthrough'>
	I1008 17:59:14.822838  548894 main.go:141] libmachine: (ha-094095-m03)   
	I1008 17:59:14.822842  548894 main.go:141] libmachine: (ha-094095-m03)   </cpu>
	I1008 17:59:14.822847  548894 main.go:141] libmachine: (ha-094095-m03)   <os>
	I1008 17:59:14.822857  548894 main.go:141] libmachine: (ha-094095-m03)     <type>hvm</type>
	I1008 17:59:14.822865  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='cdrom'/>
	I1008 17:59:14.822879  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='hd'/>
	I1008 17:59:14.822888  548894 main.go:141] libmachine: (ha-094095-m03)     <bootmenu enable='no'/>
	I1008 17:59:14.822897  548894 main.go:141] libmachine: (ha-094095-m03)   </os>
	I1008 17:59:14.822903  548894 main.go:141] libmachine: (ha-094095-m03)   <devices>
	I1008 17:59:14.822910  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='cdrom'>
	I1008 17:59:14.822919  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/boot2docker.iso'/>
	I1008 17:59:14.822926  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hdc' bus='scsi'/>
	I1008 17:59:14.822931  548894 main.go:141] libmachine: (ha-094095-m03)       <readonly/>
	I1008 17:59:14.822939  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.822951  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='disk'>
	I1008 17:59:14.822984  548894 main.go:141] libmachine: (ha-094095-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:59:14.822998  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk'/>
	I1008 17:59:14.823004  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hda' bus='virtio'/>
	I1008 17:59:14.823008  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.823012  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823018  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='mk-ha-094095'/>
	I1008 17:59:14.823028  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823037  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823050  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823062  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='default'/>
	I1008 17:59:14.823072  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823080  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823089  548894 main.go:141] libmachine: (ha-094095-m03)     <serial type='pty'>
	I1008 17:59:14.823097  548894 main.go:141] libmachine: (ha-094095-m03)       <target port='0'/>
	I1008 17:59:14.823105  548894 main.go:141] libmachine: (ha-094095-m03)     </serial>
	I1008 17:59:14.823114  548894 main.go:141] libmachine: (ha-094095-m03)     <console type='pty'>
	I1008 17:59:14.823128  548894 main.go:141] libmachine: (ha-094095-m03)       <target type='serial' port='0'/>
	I1008 17:59:14.823139  548894 main.go:141] libmachine: (ha-094095-m03)     </console>
	I1008 17:59:14.823147  548894 main.go:141] libmachine: (ha-094095-m03)     <rng model='virtio'>
	I1008 17:59:14.823159  548894 main.go:141] libmachine: (ha-094095-m03)       <backend model='random'>/dev/random</backend>
	I1008 17:59:14.823166  548894 main.go:141] libmachine: (ha-094095-m03)     </rng>
	I1008 17:59:14.823173  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823181  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823189  548894 main.go:141] libmachine: (ha-094095-m03)   </devices>
	I1008 17:59:14.823202  548894 main.go:141] libmachine: (ha-094095-m03) </domain>
	I1008 17:59:14.823214  548894 main.go:141] libmachine: (ha-094095-m03) 
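The XML dumped above is the complete libvirt domain definition the kvm2 driver generates for ha-094095-m03 (cdrom boot from boot2docker.iso, raw virtio disk, virtio NICs on the default and mk-ha-094095 networks, pty console, virtio rng). A minimal shell equivalent of the define/start/lease-poll sequence that the log records next, assuming the XML were saved as ha-094095-m03.xml on the libvirt host, would be:

    # Sketch only; the kvm2 driver performs these steps through the libvirt API, not virsh.
    virsh --connect qemu:///system define ha-094095-m03.xml   # "Creating domain..."
    virsh --connect qemu:///system start ha-094095-m03
    # The retry loop below is polling for this DHCP lease to appear:
    virsh --connect qemu:///system net-dhcp-leases mk-ha-094095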
	I1008 17:59:14.829896  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:d4:34:b1 in network default
	I1008 17:59:14.830619  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:14.830642  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring networks are active...
	I1008 17:59:14.831385  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network default is active
	I1008 17:59:14.831784  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network mk-ha-094095 is active
	I1008 17:59:14.832205  548894 main.go:141] libmachine: (ha-094095-m03) Getting domain xml...
	I1008 17:59:14.832929  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:16.039421  548894 main.go:141] libmachine: (ha-094095-m03) Waiting to get IP...
	I1008 17:59:16.040212  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.040604  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.040627  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.040576  549655 retry.go:31] will retry after 310.617511ms: waiting for machine to come up
	I1008 17:59:16.353098  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.353638  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.353666  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.353600  549655 retry.go:31] will retry after 370.013025ms: waiting for machine to come up
	I1008 17:59:16.725039  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.725471  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.725511  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.725419  549655 retry.go:31] will retry after 335.057817ms: waiting for machine to come up
	I1008 17:59:17.061762  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.062145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.062168  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.062095  549655 retry.go:31] will retry after 553.959397ms: waiting for machine to come up
	I1008 17:59:17.617869  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.618404  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.618431  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.618345  549655 retry.go:31] will retry after 506.335647ms: waiting for machine to come up
	I1008 17:59:18.125977  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.126353  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.126384  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.126291  549655 retry.go:31] will retry after 734.408354ms: waiting for machine to come up
	I1008 17:59:18.862107  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.862605  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.862632  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.862544  549655 retry.go:31] will retry after 1.020122482s: waiting for machine to come up
	I1008 17:59:19.884038  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:19.884492  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:19.884530  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:19.884425  549655 retry.go:31] will retry after 1.125801014s: waiting for machine to come up
	I1008 17:59:21.011532  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:21.011993  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:21.012020  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:21.011944  549655 retry.go:31] will retry after 1.660141079s: waiting for machine to come up
	I1008 17:59:22.673143  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:22.673540  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:22.673570  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:22.673522  549655 retry.go:31] will retry after 1.579793422s: waiting for machine to come up
	I1008 17:59:24.255498  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:24.256062  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:24.256089  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:24.256014  549655 retry.go:31] will retry after 2.586780396s: waiting for machine to come up
	I1008 17:59:26.845780  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:26.846232  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:26.846256  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:26.846181  549655 retry.go:31] will retry after 2.461770006s: waiting for machine to come up
	I1008 17:59:29.309639  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:29.310146  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:29.310176  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:29.310088  549655 retry.go:31] will retry after 4.519355473s: waiting for machine to come up
	I1008 17:59:33.833985  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:33.834361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:33.834386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:33.834293  549655 retry.go:31] will retry after 3.493644498s: waiting for machine to come up
	I1008 17:59:37.331421  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.331914  548894 main.go:141] libmachine: (ha-094095-m03) Found IP for machine: 192.168.39.194
	I1008 17:59:37.331939  548894 main.go:141] libmachine: (ha-094095-m03) Reserving static IP address...
	I1008 17:59:37.331956  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has current primary IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.332395  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find host DHCP lease matching {name: "ha-094095-m03", mac: "52:54:00:e6:8f:e3", ip: "192.168.39.194"} in network mk-ha-094095
	I1008 17:59:37.404136  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Getting to WaitForSSH function...
	I1008 17:59:37.404175  548894 main.go:141] libmachine: (ha-094095-m03) Reserved static IP address: 192.168.39.194
	I1008 17:59:37.404188  548894 main.go:141] libmachine: (ha-094095-m03) Waiting for SSH to be available...
	I1008 17:59:37.406755  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407114  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.407145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407257  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH client type: external
	I1008 17:59:37.407295  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa (-rw-------)
	I1008 17:59:37.407348  548894 main.go:141] libmachine: (ha-094095-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:59:37.407377  548894 main.go:141] libmachine: (ha-094095-m03) DBG | About to run SSH command:
	I1008 17:59:37.407391  548894 main.go:141] libmachine: (ha-094095-m03) DBG | exit 0
	I1008 17:59:37.534234  548894 main.go:141] libmachine: (ha-094095-m03) DBG | SSH cmd err, output: <nil>: 
	I1008 17:59:37.534542  548894 main.go:141] libmachine: (ha-094095-m03) KVM machine creation complete!
	I1008 17:59:37.535062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:37.535615  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.535835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.536043  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:59:37.536062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetState
	I1008 17:59:37.537459  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:59:37.537477  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:59:37.537484  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:59:37.537492  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.539962  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540458  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.540491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540661  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.540847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.540985  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.541188  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.541386  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.541674  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.541690  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:59:37.649416  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:37.649443  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:59:37.649452  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.652360  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652754  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.652783  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652904  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.653099  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653253  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653372  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.653521  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.653691  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.653700  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:59:37.763719  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:59:37.763801  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:59:37.763820  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:59:37.763835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764121  548894 buildroot.go:166] provisioning hostname "ha-094095-m03"
	I1008 17:59:37.764156  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764347  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.766798  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.767194  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.767617  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767784  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767982  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.768161  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.768362  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.768381  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m03 && echo "ha-094095-m03" | sudo tee /etc/hostname
	I1008 17:59:37.892598  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m03
	
	I1008 17:59:37.892638  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.895717  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896104  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.896139  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896357  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.896582  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896764  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896930  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.897130  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.897346  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.897371  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:59:38.015892  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:38.015942  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:59:38.015964  548894 buildroot.go:174] setting up certificates
	I1008 17:59:38.015976  548894 provision.go:84] configureAuth start
	I1008 17:59:38.015994  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:38.016285  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.018925  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019329  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.019361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019480  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.021681  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022085  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.022109  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022295  548894 provision.go:143] copyHostCerts
	I1008 17:59:38.022355  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022398  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:59:38.022410  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022497  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:59:38.022612  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022639  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:59:38.022646  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022684  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:59:38.022749  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022772  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:59:38.022780  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022817  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:59:38.022905  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m03 san=[127.0.0.1 192.168.39.194 ha-094095-m03 localhost minikube]
	I1008 17:59:38.409825  548894 provision.go:177] copyRemoteCerts
	I1008 17:59:38.409880  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:59:38.409906  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.412474  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.412819  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.412850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.413057  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.413233  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.413436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.413614  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.500707  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:59:38.500793  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:59:38.526942  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:59:38.527009  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:59:38.552205  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:59:38.552273  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 17:59:38.575397  548894 provision.go:87] duration metric: took 559.401387ms to configureAuth
	I1008 17:59:38.575426  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:59:38.575799  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:38.575895  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.579241  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579746  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.579778  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579962  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.580162  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580375  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580557  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.580756  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.580976  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.581001  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:59:38.814916  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:59:38.814943  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:59:38.814951  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetURL
	I1008 17:59:38.816195  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using libvirt version 6000000
	I1008 17:59:38.818782  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.819181  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819313  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:59:38.819324  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:59:38.819331  548894 client.go:171] duration metric: took 24.506447945s to LocalClient.Create
	I1008 17:59:38.819354  548894 start.go:167] duration metric: took 24.506513664s to libmachine.API.Create "ha-094095"
	I1008 17:59:38.819366  548894 start.go:293] postStartSetup for "ha-094095-m03" (driver="kvm2")
	I1008 17:59:38.819379  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:59:38.819402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:38.819667  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:59:38.819695  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.822386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.822850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.822878  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.823079  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.823255  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.823425  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.823576  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.911016  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:59:38.915516  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:59:38.915544  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:59:38.915616  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:59:38.915703  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:59:38.915717  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:59:38.915843  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:59:38.927016  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:38.951613  548894 start.go:296] duration metric: took 132.232716ms for postStartSetup
	I1008 17:59:38.951663  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:38.952254  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.954773  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955177  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.955206  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955479  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:38.955726  548894 start.go:128] duration metric: took 24.661507137s to createHost
	I1008 17:59:38.955754  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.957824  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958152  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.958180  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958260  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.958436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958614  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958783  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.958982  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.959149  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.959198  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:59:39.066802  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410379.042145365
	
	I1008 17:59:39.066831  548894 fix.go:216] guest clock: 1728410379.042145365
	I1008 17:59:39.066838  548894 fix.go:229] Guest: 2024-10-08 17:59:39.042145365 +0000 UTC Remote: 2024-10-08 17:59:38.955741605 +0000 UTC m=+140.046701810 (delta=86.40376ms)
	I1008 17:59:39.066854  548894 fix.go:200] guest clock delta is within tolerance: 86.40376ms
	I1008 17:59:39.066859  548894 start.go:83] releasing machines lock for "ha-094095-m03", held for 24.772764688s
	I1008 17:59:39.066879  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.067121  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:39.069711  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.070086  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.070113  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.072386  548894 out.go:177] * Found network options:
	I1008 17:59:39.073842  548894 out.go:177]   - NO_PROXY=192.168.39.99,192.168.39.65
	W1008 17:59:39.075265  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.075288  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.075301  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.075811  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076009  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076099  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:59:39.076150  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	W1008 17:59:39.076202  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.076228  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.076306  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:59:39.076328  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:39.078554  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.078807  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079018  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079043  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079229  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079324  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079350  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079420  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.079542  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079593  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.079786  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.079847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.080000  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.080138  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.318698  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:59:39.324927  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:59:39.324990  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:59:39.343637  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:59:39.343660  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:59:39.343717  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:59:39.360309  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:59:39.373825  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:59:39.373881  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:59:39.387260  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:59:39.400202  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:59:39.520831  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:59:39.680675  548894 docker.go:233] disabling docker service ...
	I1008 17:59:39.680761  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:59:39.695394  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:59:39.710367  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:59:39.839252  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:59:39.972794  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:59:39.988321  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:59:40.006947  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:59:40.007031  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.018072  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:59:40.018137  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.029758  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.040612  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.051467  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:59:40.062960  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.074528  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.091933  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.101742  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:59:40.111189  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:59:40.111232  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:59:40.123431  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:59:40.132781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:40.256434  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
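Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the rest of the run relies on. A quick spot-check on the node (reconstructed from the commands above, not captured from the test VM) would be:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the sed commands in the log:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    sudo systemctl is-active crio && sudo crictl version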
	I1008 17:59:40.349829  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:59:40.349903  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:59:40.354785  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:59:40.354842  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:59:40.358519  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:59:40.397714  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:59:40.397812  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.425086  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.452883  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:59:40.454244  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:59:40.455477  548894 out.go:177]   - env NO_PROXY=192.168.39.99,192.168.39.65
	I1008 17:59:40.456757  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:40.459422  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.459818  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:40.459840  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.460096  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:59:40.464498  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:40.479877  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:59:40.480107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:40.480402  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.480441  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.495933  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I1008 17:59:40.496453  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.496925  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.496949  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.497271  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.497471  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:59:40.499057  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:40.499430  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.499465  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.513547  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I1008 17:59:40.514005  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.514450  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.514473  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.514842  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.515015  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:40.515189  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.194
	I1008 17:59:40.515202  548894 certs.go:194] generating shared ca certs ...
	I1008 17:59:40.515221  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.515367  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:59:40.515423  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:59:40.515435  548894 certs.go:256] generating profile certs ...
	I1008 17:59:40.515545  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:59:40.515578  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d
	I1008 17:59:40.515597  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 17:59:40.734889  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d ...
	I1008 17:59:40.734923  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d: {Name:mkaac2d16400496ba6ef1c81a4206e8cf0480e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735091  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d ...
	I1008 17:59:40.735104  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d: {Name:mk3a55a29959b59f407eb97877f8ee016f652037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735177  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:59:40.735309  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:59:40.735433  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:59:40.735451  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:59:40.735464  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:59:40.735479  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:59:40.735491  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:59:40.735503  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:59:40.735514  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:59:40.735528  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:59:40.750415  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:59:40.750523  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:59:40.750564  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:59:40.750576  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:59:40.750597  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:59:40.750620  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:59:40.750642  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:59:40.750679  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:40.750709  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:40.750727  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:59:40.750739  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:59:40.750776  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:40.754187  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754657  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:40.754682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754891  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:40.755083  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:40.755214  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:40.755357  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:40.826678  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:59:40.831630  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:59:40.843594  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:59:40.848493  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:59:40.859904  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:59:40.864097  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:59:40.874362  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:59:40.878501  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:59:40.890535  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:59:40.895442  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:59:40.907886  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:59:40.911759  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:59:40.921878  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:59:40.947644  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:59:40.970914  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:59:40.993912  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:59:41.017348  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1008 17:59:41.040662  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:59:41.063411  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:59:41.086440  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:59:41.109681  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:59:41.132484  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:59:41.156226  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:59:41.178867  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:59:41.195488  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:59:41.212613  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:59:41.228807  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:59:41.246244  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:59:41.262224  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:59:41.277985  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1008 17:59:41.294525  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:59:41.300038  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:59:41.311084  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315442  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315488  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.321163  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:59:41.332088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:59:41.342926  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347780  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347833  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.353198  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:59:41.363300  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:59:41.373282  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377636  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377682  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.383451  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
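	The three `openssl x509 -hash` / `ln -fs` pairs above install each certificate under /etc/ssl/certs using OpenSSL's subject-hash naming, which is what lets TLS clients on the node trust the minikube CA and the test certs. A minimal sketch of the same step done by hand, using one of the paths from the log (for that CA the hash resolves to the b5213941.0 link seen above):
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, e.g. b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"  # same symlink the log creates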
	I1008 17:59:41.393738  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:59:41.397604  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:59:41.397660  548894 kubeadm.go:934] updating node {m03 192.168.39.194 8443 v1.31.1 crio true true} ...
	I1008 17:59:41.397755  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:59:41.397799  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:59:41.397831  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:59:41.412820  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
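	The modprobe above pre-loads the IPVS modules that kube-vip's control-plane load-balancer mode relies on; if the join later stalls on the VIP, confirming they actually loaded in the guest is a cheap first check (a sketch, not part of the recorded run):
	    lsmod | grep -E 'ip_vs|nf_conntrack'   # ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack should be listed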
	I1008 17:59:41.412901  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
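	The manifest above is written as a static pod (see the scp to /etc/kubernetes/manifests/kube-vip.yaml further down), so kubelet starts kube-vip directly; per this config it advertises the control-plane VIP 192.168.39.254 over ARP on eth0 and load-balances port 8443 across the control-plane members. A quick way to confirm the VIP is live from inside the guest, assuming the addresses from this config (a sketch, not part of the test run):
	    sudo cat /etc/kubernetes/manifests/kube-vip.yaml        # static pod manifest picked up by kubelet
	    ip -o addr show dev eth0 | grep 192.168.39.254          # VIP is bound on whichever node holds the lease
	    curl -sk https://192.168.39.254:8443/healthz; echo      # apiserver answering through the VIP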
	I1008 17:59:41.412955  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.422366  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:59:41.422410  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.431355  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:59:41.431384  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431397  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1008 17:59:41.431416  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431363  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431494  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:41.446391  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.446418  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:59:41.446444  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:59:41.446446  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:59:41.446463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:59:41.447018  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.480884  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:59:41.480970  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
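	Each binary is fetched from dl.k8s.io with a checksum pinned to the matching .sha256 file (the `checksum=file:` suffix in the URLs above), cached under .minikube/cache, and only pushed to the node when the `stat` existence check fails. The equivalent manual download-and-verify for one binary, using the same URLs the log references (a sketch; minikube does this through its own downloader, not curl):
	    curl -fLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
	    curl -fLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # should print: kubelet: OK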
	I1008 17:59:42.313012  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:59:42.322438  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1008 17:59:42.338702  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:59:42.365144  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:59:42.382514  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:59:42.386113  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:42.397995  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:42.523088  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:59:42.540754  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:42.541257  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:42.541326  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:42.559172  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I1008 17:59:42.559678  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:42.560333  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:42.560360  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:42.560754  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:42.560977  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:42.561148  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:59:42.561320  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:59:42.561345  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:42.564781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565346  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:42.565377  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565645  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:42.565831  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:42.566030  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:42.566199  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:42.729842  548894 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:42.729907  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443"
	I1008 18:00:04.832594  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443": (22.102635583s)
	I1008 18:00:04.832637  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 18:00:05.279641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m03 minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 18:00:05.406989  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 18:00:05.528741  548894 start.go:319] duration metric: took 22.967581062s to joinCluster
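	The join itself is the kubeadm invocation above: a token minted on the primary with `--ttl=0`, then `kubeadm join ... --control-plane` on m03, followed by the minikube node label and removal of the control-plane NoSchedule taint so the node can also run workloads. After the ~22s join, a sanity check against the new member could look like this (a sketch, assuming kubectl is pointed at the ha-094095 profile's kubeconfig):
	    kubectl get nodes -o wide                                      # ha-094095-m03 should appear as a control-plane node
	    kubectl -n kube-system get pods -o wide | grep ha-094095-m03   # etcd/apiserver/controller-manager/scheduler scheduled on m03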
	I1008 18:00:05.528848  548894 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:00:05.529236  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:00:05.530083  548894 out.go:177] * Verifying Kubernetes components...
	I1008 18:00:05.531162  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:00:05.714521  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:00:05.729813  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:00:05.730150  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 18:00:05.730231  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 18:00:05.730539  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m03" to be "Ready" ...
	I1008 18:00:05.730633  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:05.730651  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:05.730664  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:05.730673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:05.734671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.231617  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.231641  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.231650  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.231655  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.234903  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.731584  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.731606  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.731615  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.731620  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.735426  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.231620  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.231630  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.231634  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.235355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.730822  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.730855  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.730867  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.730873  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.735340  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:07.736449  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:08.230853  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.230878  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.230887  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.230892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.234386  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:08.731681  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.731712  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.731722  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.731727  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.735243  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.231587  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.231609  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.231618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.231623  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.235294  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.731675  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.731700  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.731709  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.731713  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.735299  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.231249  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.231335  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.231353  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.231359  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.234866  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.235558  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:10.731835  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.731862  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.731876  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.731881  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.735185  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.231623  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.231632  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.231636  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.235238  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.731791  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.731826  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.731839  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.731845  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.735179  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.231312  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.231339  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.231350  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.231356  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.234779  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.235754  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:12.731629  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.731658  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.731669  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.731673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.735274  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.231468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.231492  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.231500  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.231503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.234905  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.731604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.731613  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.731618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.734788  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.231250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.231274  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.231282  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.231287  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.234694  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.731084  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.731109  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.731117  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.731121  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.735096  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.735874  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:15.231041  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.231070  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.231079  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.231083  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.234482  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:15.731250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.731276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.731288  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.731296  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.734547  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.230897  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.230919  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.230928  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.230937  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.234261  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.731599  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.731608  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.731612  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.735249  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.736046  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:17.231278  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.231302  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.231311  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.231316  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.234212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:17.731562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.731585  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.731594  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.731597  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.735391  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.231528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.231552  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.231561  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.231565  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.234777  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.731570  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.731593  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.731601  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.731608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.735359  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.736085  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:19.231579  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.231604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.231618  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.231622  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.234902  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:19.731112  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.731142  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.731155  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.731162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.734221  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.231563  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.231591  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.231600  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.231605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.234855  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.731738  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.731773  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.731785  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.731792  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.735486  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.231659  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.231685  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.231696  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.231705  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.234967  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.235427  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:21.730803  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.730829  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.730838  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.730843  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.734021  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.231586  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.231613  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.231624  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.231630  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.234981  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.731022  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.731056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.731064  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.731070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.734252  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.231192  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.231215  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.231223  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.231228  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.234975  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.235794  548894 node_ready.go:49] node "ha-094095-m03" has status "Ready":"True"
	I1008 18:00:23.235816  548894 node_ready.go:38] duration metric: took 17.50525839s for node "ha-094095-m03" to be "Ready" ...
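	The block above is minikube's readiness poll: a GET of /api/v1/nodes/ha-094095-m03 roughly every 500ms until the Ready condition flips to True, which took about 17.5s here. Outside the test harness the same gate can be expressed with kubectl (a sketch, using the same 6m budget the log allows):
	    kubectl wait node/ha-094095-m03 --for=condition=Ready --timeout=6m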
	I1008 18:00:23.235826  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:23.235893  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:23.235903  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.235914  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.235918  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.241231  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:23.248355  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.248435  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 18:00:23.248444  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.248452  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.248456  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.250946  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.251489  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.251502  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.251510  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.251515  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.253741  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.254169  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.254188  548894 pod_ready.go:82] duration metric: took 5.808287ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254199  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254280  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 18:00:23.254291  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.254300  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.254309  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.256714  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.257261  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.257276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.257283  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.257286  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.259498  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.260042  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.260061  548894 pod_ready.go:82] duration metric: took 5.850763ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260072  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 18:00:23.260143  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.260153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.260162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.262300  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.262973  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.262989  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.262999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.263005  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.265000  548894 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1008 18:00:23.265522  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.265544  548894 pod_ready.go:82] duration metric: took 5.464426ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265555  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265622  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 18:00:23.265634  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.265643  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.265648  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.267966  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.268468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:23.268479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.268486  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.268491  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.270736  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.271272  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.271290  548894 pod_ready.go:82] duration metric: took 5.727216ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.271300  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.431729  548894 request.go:632] Waited for 160.342792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431825  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431837  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.431850  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.431861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.438271  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:23.631298  548894 request.go:632] Waited for 192.164013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631383  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631391  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.631408  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.631433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.635040  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.635580  548894 pod_ready.go:93] pod "etcd-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.635599  548894 pod_ready.go:82] duration metric: took 364.291447ms for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.635618  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.831837  548894 request.go:632] Waited for 196.121278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831896  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831902  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.831909  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.831913  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.834801  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.031893  548894 request.go:632] Waited for 196.106655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031976  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031981  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.031989  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.031993  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.035406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.036144  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.036163  548894 pod_ready.go:82] duration metric: took 400.535944ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.036173  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.232096  548894 request.go:632] Waited for 195.798323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232173  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232180  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.232192  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.232201  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.235054  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.432054  548894 request.go:632] Waited for 196.298402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432116  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432121  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.432128  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.432132  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.435456  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.436205  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.436233  548894 pod_ready.go:82] duration metric: took 400.05192ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.436253  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.631271  548894 request.go:632] Waited for 194.926969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631366  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631374  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.631384  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.631390  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.635001  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.831928  548894 request.go:632] Waited for 195.938579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832009  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832015  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.832023  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.832027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.834879  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.835519  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.835541  548894 pod_ready.go:82] duration metric: took 399.279605ms for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.835556  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.031600  548894 request.go:632] Waited for 195.955469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031671  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031676  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.031684  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.031689  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.035187  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.231262  548894 request.go:632] Waited for 195.293412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231320  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231326  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.231339  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.231343  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.234515  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.235363  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.235391  548894 pod_ready.go:82] duration metric: took 399.824349ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.235422  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.431278  548894 request.go:632] Waited for 195.760337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431347  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431353  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.431375  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.431379  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.434406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.631990  548894 request.go:632] Waited for 196.659604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632053  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632058  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.632067  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.632070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.635545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.636227  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.636248  548894 pod_ready.go:82] duration metric: took 400.813116ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.636259  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.831790  548894 request.go:632] Waited for 195.428011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831873  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831885  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.831896  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.831903  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.835520  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.031847  548894 request.go:632] Waited for 195.394713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031926  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031931  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.031939  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.031943  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.034885  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:26.035588  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.035611  548894 pod_ready.go:82] duration metric: took 399.345696ms for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.035622  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.231657  548894 request.go:632] Waited for 195.935325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231715  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231720  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.231728  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.231732  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.234989  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.432143  548894 request.go:632] Waited for 196.401893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432242  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432253  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.432262  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.432270  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.435436  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.436096  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.436113  548894 pod_ready.go:82] duration metric: took 400.484447ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.436124  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.632222  548894 request.go:632] Waited for 196.022184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632309  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632317  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.632325  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.632332  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.636157  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.831362  548894 request.go:632] Waited for 194.278962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831419  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831424  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.831433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.831445  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.834670  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.835262  548894 pod_ready.go:93] pod "kube-proxy-krxss" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.835280  548894 pod_ready.go:82] duration metric: took 399.149562ms for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.835292  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.031407  548894 request.go:632] Waited for 196.014244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031471  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.031490  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.031499  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.034651  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.231683  548894 request.go:632] Waited for 196.28215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231743  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231750  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.231761  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.231766  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.234677  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:27.235361  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.235391  548894 pod_ready.go:82] duration metric: took 400.091229ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.235405  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.431237  548894 request.go:632] Waited for 195.72193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431329  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431337  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.431353  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.431360  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.434428  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.631604  548894 request.go:632] Waited for 196.391274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631664  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631669  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.631678  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.631683  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.635129  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.635990  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.636017  548894 pod_ready.go:82] duration metric: took 400.603779ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.636029  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.832057  548894 request.go:632] Waited for 195.932393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832129  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832137  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.832147  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.832152  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.835638  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.031786  548894 request.go:632] Waited for 195.242001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031845  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031850  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.031857  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.031861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.035281  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.035945  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.035968  548894 pod_ready.go:82] duration metric: took 399.926983ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.035978  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.232045  548894 request.go:632] Waited for 195.987112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232140  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.232148  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.232153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.235683  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.431773  548894 request.go:632] Waited for 195.354282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431855  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431860  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.431867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.431872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.435214  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.435815  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.435951  548894 pod_ready.go:82] duration metric: took 399.956305ms for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.435993  548894 pod_ready.go:39] duration metric: took 5.200153143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:28.436017  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:00:28.436094  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:00:28.452375  548894 api_server.go:72] duration metric: took 22.923490341s to wait for apiserver process to appear ...
	I1008 18:00:28.452398  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:00:28.452421  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 18:00:28.456918  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 18:00:28.456978  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 18:00:28.456986  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.456994  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.456999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.457742  548894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1008 18:00:28.457798  548894 api_server.go:141] control plane version: v1.31.1
	I1008 18:00:28.457809  548894 api_server.go:131] duration metric: took 5.40508ms to wait for apiserver health ...
	I1008 18:00:28.457822  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:00:28.632286  548894 request.go:632] Waited for 174.373411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632364  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632372  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.632382  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.632388  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.638836  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:28.647332  548894 system_pods.go:59] 24 kube-system pods found
	I1008 18:00:28.647367  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:28.647374  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:28.647379  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:28.647384  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:28.647389  548894 system_pods.go:61] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:28.647394  548894 system_pods.go:61] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:28.647399  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:28.647404  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:28.647409  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:28.647417  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:28.647426  548894 system_pods.go:61] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:28.647432  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:28.647439  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:28.647445  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:28.647451  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:28.647456  548894 system_pods.go:61] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:28.647463  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:28.647468  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:28.647476  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:28.647482  548894 system_pods.go:61] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:28.647489  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:28.647494  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:28.647499  548894 system_pods.go:61] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:28.647505  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:28.647514  548894 system_pods.go:74] duration metric: took 189.683627ms to wait for pod list to return data ...
	I1008 18:00:28.647529  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:00:28.831958  548894 request.go:632] Waited for 184.329764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832044  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.832067  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.832073  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.837077  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:28.837234  548894 default_sa.go:45] found service account: "default"
	I1008 18:00:28.837253  548894 default_sa.go:55] duration metric: took 189.716305ms for default service account to be created ...
	I1008 18:00:28.837265  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:00:29.031904  548894 request.go:632] Waited for 194.536031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031965  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031970  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.031979  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.031983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.037622  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:29.044999  548894 system_pods.go:86] 24 kube-system pods found
	I1008 18:00:29.045026  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:29.045032  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:29.045036  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:29.045039  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:29.045043  548894 system_pods.go:89] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:29.045046  548894 system_pods.go:89] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:29.045050  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:29.045053  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:29.045056  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:29.045059  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:29.045063  548894 system_pods.go:89] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:29.045066  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:29.045070  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:29.045076  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:29.045082  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:29.045086  548894 system_pods.go:89] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:29.045089  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:29.045093  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:29.045098  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:29.045104  548894 system_pods.go:89] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:29.045107  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:29.045111  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:29.045114  548894 system_pods.go:89] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:29.045117  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:29.045124  548894 system_pods.go:126] duration metric: took 207.850736ms to wait for k8s-apps to be running ...
	I1008 18:00:29.045133  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:00:29.045176  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:00:29.059678  548894 system_svc.go:56] duration metric: took 14.536958ms WaitForService to wait for kubelet
	I1008 18:00:29.059706  548894 kubeadm.go:582] duration metric: took 23.530822988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:00:29.059724  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:00:29.231880  548894 request.go:632] Waited for 172.048672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231961  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231966  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.231974  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.231981  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.238241  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:29.239300  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239332  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239347  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239353  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239361  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239366  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239371  548894 node_conditions.go:105] duration metric: took 179.642781ms to run NodePressure ...
	I1008 18:00:29.239392  548894 start.go:241] waiting for startup goroutines ...
	I1008 18:00:29.239417  548894 start.go:255] writing updated cluster config ...
	I1008 18:00:29.239708  548894 ssh_runner.go:195] Run: rm -f paused
	I1008 18:00:29.291443  548894 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:00:29.293244  548894 out.go:177] * Done! kubectl is now configured to use "ha-094095" cluster and "default" namespace by default
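
	Note: the trace above ends with minikube's readiness verification: pod_ready.go polls each control-plane pod until its Ready condition is True (with client-side throttling between requests), then api_server.go probes /healthz and /version before kubeadm.go reports the cluster as started. The snippet below is a minimal, hypothetical client-go sketch of that per-pod wait, not minikube's actual implementation; the kubeconfig path, pod name, and timeouts are placeholders.

	// Hypothetical sketch only (assumed names/paths): approximates the
	// pod_ready.go wait loop shown in the log above using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; minikube derives this from its profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// 6m upper bound, mirroring the "waiting up to 6m0s" lines above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		name := "kube-apiserver-ha-094095" // placeholder pod name
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
			select {
			case <-ctx.Done():
				panic(ctx.Err())
			case <-time.After(2 * time.Second): // simple poll interval, not minikube's backoff
			}
		}
	}
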
	
	
	==> CRI-O <==
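	Note: the entries below are debug-level traces of CRI gRPC traffic handled by CRI-O (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers). As an assumption-laden illustration only (socket path and timeout are placeholders, and this is not how the report was generated), the sketch below issues the same two RuntimeService calls against CRI-O's default UNIX socket using the generated k8s.io/cri-api client.

	// Hypothetical sketch: query CRI-O over its CRI socket for the same
	// Version and ListContainers RPCs that appear in the debug log below.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket path; adjust for the host under test.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Same RPC as the "/runtime.v1.RuntimeService/Version" entries below.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("runtime %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Same RPC as the ListContainers traces below; no filter returns the full list.
		list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d containers reported\n", len(list.Containers))
	}
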
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.167021669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4252cfef-b062-4531-afbc-2cd5506eae2f name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.168834563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e175f64-b830-416e-bfa4-1b04b7f2aa2a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.169449204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410654169365931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e175f64-b830-416e-bfa4-1b04b7f2aa2a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.170179716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b0bf263-27d5-4ef0-abc6-5e226949c800 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.170233689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b0bf263-27d5-4ef0-abc6-5e226949c800 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.171086531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b0bf263-27d5-4ef0-abc6-5e226949c800 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.210223721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55c158c8-a100-42d8-87d9-b622e5d5d50b name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.210304212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55c158c8-a100-42d8-87d9-b622e5d5d50b name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.211700823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cb6241a-fc87-4aae-95c3-da67e2ce8890 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.212085832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410654212065549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cb6241a-fc87-4aae-95c3-da67e2ce8890 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.212731519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc2f066b-f952-4baa-b636-89af92e81a4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.212802392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc2f066b-f952-4baa-b636-89af92e81a4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.213024945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc2f066b-f952-4baa-b636-89af92e81a4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.222797737Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76635f7c-2932-42b6-8344-a6b6e7974941 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.223022855Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-n779r,Uid:d3a10d4a-6add-4642-961b-b7b00f9e363b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410431779985652,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T18:00:30.266893198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6c7xl,Uid:5be15582-d4c7-4ec3-95db-7f9b7db4280d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728410297358103747,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T17:58:17.031751608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghz9x,Uid:a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410297357205428,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-10-08T17:58:17.036351692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54520f81-08fe-4612-bef9-1fe0016c45ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410297355597197,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-08T17:58:17.037337141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&PodSandboxMetadata{Name:kube-proxy-gnmch,Uid:2e4ec0ad-049b-48e6-90b2-8b8430d821f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410284807011649,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-08T17:58:03.897237361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&PodSandboxMetadata{Name:kindnet-mclfx,Uid:fca2ce96-9193-48a5-9dc7-9d20bde6787f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410284802925523,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T17:58:03.882142734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-094095,Uid:4ab63a85f4abc9ded81a3460d92ef212,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728410273569368635,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.99:8443,kubernetes.io/config.hash: 4ab63a85f4abc9ded81a3460d92ef212,kubernetes.io/config.seen: 2024-10-08T17:57:53.083050125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-094095,Uid:19b7e8dee4daa510f3f23034617cd71c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273552850399,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4da
a510f3f23034617cd71c,},Annotations:map[string]string{kubernetes.io/config.hash: 19b7e8dee4daa510f3f23034617cd71c,kubernetes.io/config.seen: 2024-10-08T17:57:53.083055839Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&PodSandboxMetadata{Name:etcd-ha-094095,Uid:22ef4792d58f06f8319e0939993449f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273547684723,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.99:2379,kubernetes.io/config.hash: 22ef4792d58f06f8319e0939993449f9,kubernetes.io/config.seen: 2024-10-08T17:57:53.083056812Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f021979b9e57f9b85a8710
325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-094095,Uid:2762c7155c0d46d981fd81220017a92c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273536917657,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2762c7155c0d46d981fd81220017a92c,kubernetes.io/config.seen: 2024-10-08T17:57:53.083054587Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-094095,Uid:87f977c77bded84c5cd8640a7d7c6034,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273535142157,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.con
tainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87f977c77bded84c5cd8640a7d7c6034,kubernetes.io/config.seen: 2024-10-08T17:57:53.083053476Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=76635f7c-2932-42b6-8344-a6b6e7974941 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.223727933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=598cac09-766e-47f3-85ee-2d147c5f8ac6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.223778551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=598cac09-766e-47f3-85ee-2d147c5f8ac6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.224001686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=598cac09-766e-47f3-85ee-2d147c5f8ac6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.261785086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d276d130-665b-4135-9b06-1ccf2503e71a name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.261858995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d276d130-665b-4135-9b06-1ccf2503e71a name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.262657054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c17a389b-9c3b-4c9b-80d6-72183d16f9e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.263128356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410654263105724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c17a389b-9c3b-4c9b-80d6-72183d16f9e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.263630769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37aed5e1-a71c-48e6-9581-ce1c52a07dae name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.263679985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37aed5e1-a71c-48e6-9581-ce1c52a07dae name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:14 ha-094095 crio[659]: time="2024-10-08 18:04:14.263962446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37aed5e1-a71c-48e6-9581-ce1c52a07dae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4f194cdf306a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   eaf6acce4786e       busybox-7dff88458-n779r
	079e7a8fee78f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   875cfacbeeb23       coredns-7c65d6cfc9-6c7xl
	1eb4935d542c2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   9d8f70dc17585       coredns-7c65d6cfc9-ghz9x
	dfdfc8735b822       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   d884b794bcbf8       storage-provisioner
	17a4523dfe3c8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   c791fa497b85a       kindnet-mclfx
	347854044c294       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   29ed3e17d1aab       kube-proxy-gnmch
	8f117035b9a9a       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   13853a6e388f1       kube-vip-ha-094095
	9c418725a44b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b68b365f16def       etcd-ha-094095
	3b8241e00230e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c13c52688447       kube-apiserver-ha-094095
	0224d96e8ab1a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f021979b9e57f       kube-scheduler-ha-094095
	ec97e876ef66b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a2f40f00bb5ff       kube-controller-manager-ha-094095
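	The table above is the runtime's view of containers on the primary node at the moment the logs were collected. Assuming the ha-094095 profile is still running, a comparable listing can usually be regenerated directly from CRI-O; the command below is illustrative and not part of the captured output:
	
	  minikube ssh -p ha-094095 -- sudo crictl ps -a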
	
	
	==> coredns [079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee] <==
	[INFO] 10.244.1.2:46939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173909s
	[INFO] 10.244.1.2:43197 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152065s
	[INFO] 10.244.0.4:54276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776636s
	[INFO] 10.244.0.4:42844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001027134s
	[INFO] 10.244.0.4:33552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087486s
	[INFO] 10.244.0.4:40894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128456s
	[INFO] 10.244.2.2:37156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090694s
	[INFO] 10.244.2.2:35975 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000342501s
	[INFO] 10.244.2.2:56819 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008022s
	[INFO] 10.244.2.2:40613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107574s
	[INFO] 10.244.1.2:38959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208641s
	[INFO] 10.244.0.4:58386 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011149s
	[INFO] 10.244.0.4:56827 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016311s
	[INFO] 10.244.0.4:52547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068216s
	[INFO] 10.244.0.4:59149 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077593s
	[INFO] 10.244.2.2:49444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156535s
	[INFO] 10.244.2.2:51787 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111699s
	[INFO] 10.244.2.2:52768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107964s
	[INFO] 10.244.2.2:53538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071551s
	[INFO] 10.244.1.2:52231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220976s
	[INFO] 10.244.0.4:45893 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145642s
	[INFO] 10.244.0.4:50564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012308s
	[INFO] 10.244.0.4:40912 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110407s
	[INFO] 10.244.2.2:48559 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182361s
	[INFO] 10.244.2.2:42189 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123843s
	
	
	==> coredns [1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02] <==
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000403051s
	[INFO] 10.244.2.2:33432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198542s
	[INFO] 10.244.2.2:43175 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00011602s
	[INFO] 10.244.2.2:39986 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00007233s
	[INFO] 10.244.2.2:43098 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001798194s
	[INFO] 10.244.1.2:51904 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006238586s
	[INFO] 10.244.1.2:39841 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245332s
	[INFO] 10.244.1.2:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010411466s
	[INFO] 10.244.0.4:36134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131817s
	[INFO] 10.244.0.4:60392 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136485s
	[INFO] 10.244.0.4:47750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001276s
	[INFO] 10.244.0.4:53066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112589s
	[INFO] 10.244.2.2:50951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171312s
	[INFO] 10.244.2.2:36151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001719697s
	[INFO] 10.244.2.2:59876 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00134295s
	[INFO] 10.244.2.2:34156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121408s
	[INFO] 10.244.1.2:40835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210172s
	[INFO] 10.244.1.2:35561 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210453s
	[INFO] 10.244.1.2:58285 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:57787 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236305s
	[INFO] 10.244.1.2:52947 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185701s
	[INFO] 10.244.1.2:38121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000200581s
	[INFO] 10.244.0.4:37934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195898s
	[INFO] 10.244.2.2:51605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210836s
	[INFO] 10.244.2.2:44666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117181s
	
	
	==> describe nodes <==
	Name:               ha-094095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:57:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-094095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f253fb8c294514826ad247cbfc784d
	  System UUID:                14f253fb-8c29-4514-826a-d247cbfc784d
	  Boot ID:                    6cdd0146-42c4-4814-93e6-3af5699e77ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-n779r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-6c7xl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-ghz9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-094095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-mclfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-094095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-094095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-gnmch                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-094095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-094095                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node ha-094095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node ha-094095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node ha-094095 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-094095 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	
	
	Name:               ha-094095-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:01:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-094095-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6846904a528149b4bec4ab05607145f5
	  System UUID:                6846904a-5281-49b4-bec4-ab05607145f5
	  Boot ID:                    92a2dec0-2bc9-44db-94e9-e4a68690b144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxdk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-094095-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-f5x42                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-094095-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-094095-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-r55hk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-094095-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-vip-ha-094095-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-094095-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-094095-m02 status is now: NodeNotReady
	
	
	Name:               ha-094095-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    ha-094095-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cca5410c10d94705a0a750a2a36dfcf7
	  System UUID:                cca5410c-10d9-4705-a0a7-50a2a36dfcf7
	  Boot ID:                    a52600ea-f5af-4184-95ce-18bc5a4ff10e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rxwcg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-094095-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-8v7s4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-ha-094095-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-094095-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-krxss                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-ha-094095-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-094095-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-094095-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	
	
	Name:               ha-094095-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_01_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:01:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-094095-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6fe409be99242ac858632e59843d080
	  System UUID:                c6fe409b-e992-42ac-8586-32e59843d080
	  Boot ID:                    10df0150-6a8d-4d3e-8551-af1fe0638414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jhqlp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-proxy-jjgsh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m5s)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m5s)  kubelet          Node ha-094095-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m5s)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-094095-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 17:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050015] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039380] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.822235] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417178] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.589695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.867596] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.064259] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063997] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.185531] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.116355] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.250177] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.801506] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.578485] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.057293] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117363] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.084526] kauditd_printk_skb: 79 callbacks suppressed
	[Oct 8 17:58] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.243247] kauditd_printk_skb: 28 callbacks suppressed
	[ +42.891327] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7] <==
	{"level":"warn","ts":"2024-10-08T18:04:14.510237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.519036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.520797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.525569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.533976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.540261Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.544095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.545573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.548867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.552199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.555306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.562442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.569038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.575844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.578589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.581319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.589038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.594174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.599825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.602665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.605173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.608359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.614475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.620473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:14.632578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:04:14 up 6 min,  0 users,  load average: 0.43, 0.39, 0.20
	Linux ha-094095 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a] <==
	I1008 18:03:36.521090       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:03:46.529525       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:03:46.529570       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:03:46.529732       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:03:46.529757       1 main.go:299] handling current node
	I1008 18:03:46.529773       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:03:46.529798       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:03:46.529860       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:03:46.529884       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:03:56.530637       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:03:56.530728       1 main.go:299] handling current node
	I1008 18:03:56.530780       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:03:56.530799       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:03:56.530947       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:03:56.530969       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:03:56.531022       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:03:56.531040       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:06.521023       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:06.521156       1 main.go:299] handling current node
	I1008 18:04:06.521246       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:06.521314       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:06.521746       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:06.521831       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:06.522370       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:06.522563       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b] <==
	I1008 17:57:58.485779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 17:57:58.491495       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.99]
	I1008 17:57:58.492135       1 controller.go:615] quota admission added evaluator for: endpoints
	I1008 17:57:58.499200       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 17:57:58.903637       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 17:58:00.054350       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 17:58:00.074068       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 17:58:00.230930       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 17:58:03.854509       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1008 17:58:03.954697       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1008 18:00:38.037771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45714: use of closed network connection
	E1008 18:00:38.232043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45744: use of closed network connection
	E1008 18:00:38.418256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45748: use of closed network connection
	E1008 18:00:38.622516       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45768: use of closed network connection
	E1008 18:00:38.796785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45788: use of closed network connection
	E1008 18:00:38.988513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45812: use of closed network connection
	E1008 18:00:39.174560       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45828: use of closed network connection
	E1008 18:00:39.350317       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45850: use of closed network connection
	E1008 18:00:39.525813       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45854: use of closed network connection
	E1008 18:00:39.828048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49850: use of closed network connection
	E1008 18:00:40.000068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49874: use of closed network connection
	E1008 18:00:40.192753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49888: use of closed network connection
	E1008 18:00:40.379456       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49904: use of closed network connection
	E1008 18:00:40.562970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49918: use of closed network connection
	E1008 18:00:40.742948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49938: use of closed network connection
	
	
	==> kube-controller-manager [ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb] <==
	I1008 18:01:09.767306       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-094095-m04" podCIDRs=["10.244.3.0/24"]
	I1008 18:01:09.767482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.015142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.174634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.537159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.265250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.321671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716760       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-094095-m04"
	I1008 18:01:13.777151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:20.033294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108639       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:01:28.124876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.732886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:40.603842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:02:28.755242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.757889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:02:28.778675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.891800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.567817ms"
	I1008 18:02:28.891887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.019µs"
	I1008 18:02:30.013028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:33.959772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	
	
	==> kube-proxy [347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 17:58:05.534485       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 17:58:05.568766       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	E1008 17:58:05.568940       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 17:58:05.609153       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 17:58:05.609181       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 17:58:05.609201       1 server_linux.go:169] "Using iptables Proxier"
	I1008 17:58:05.612762       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 17:58:05.613968       1 server.go:483] "Version info" version="v1.31.1"
	I1008 17:58:05.614042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 17:58:05.616792       1 config.go:199] "Starting service config controller"
	I1008 17:58:05.617139       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 17:58:05.617374       1 config.go:105] "Starting endpoint slice config controller"
	I1008 17:58:05.617451       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 17:58:05.618851       1 config.go:328] "Starting node config controller"
	I1008 17:58:05.619090       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 17:58:05.718484       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 17:58:05.718497       1 shared_informer.go:320] Caches are synced for service config
	I1008 17:58:05.720100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20] <==
	E1008 18:00:30.199446       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rzflt" node="ha-094095-m03"
	E1008 18:00:30.199562       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e0ead4a-bdd7-4fe2-8070-a2e4680f7988(default/busybox-7dff88458-rzflt) was assumed on ha-094095-m03 but assigned to ha-094095-m02" pod="default/busybox-7dff88458-rzflt"
	E1008 18:00:30.201601       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-rzflt"
	I1008 18:00:30.201672       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rzflt" node="ha-094095-m02"
	E1008 18:00:30.241278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.243855       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 00074fc5-40f9-403b-9cec-3f333b177d47(default/busybox-7dff88458-2hz9n) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2hz9n"
	E1008 18:00:30.248134       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-2hz9n"
	I1008 18:00:30.248955       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.302814       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.303201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 399813b8-6199-4631-af76-66e7e8bf4b8c(default/busybox-7dff88458-rxwcg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rxwcg"
	E1008 18:00:30.303327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" pod="default/busybox-7dff88458-rxwcg"
	I1008 18:00:30.303461       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.454050       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-l6wvv\" not found" pod="default/busybox-7dff88458-l6wvv"
	E1008 18:01:09.806729       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.806888       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c9b872af-5075-4c26-99cf-282b077912ee(kube-system/kube-proxy-jjgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jjgsh"
	E1008 18:01:09.806916       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-jjgsh"
	I1008 18:01:09.806962       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.807512       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.807581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2f9978f0-fb58-41fb-ac79-c07ec22f8b12(kube-system/kindnet-jhqlp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jhqlp"
	E1008 18:01:09.807603       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" pod="kube-system/kindnet-jhqlp"
	I1008 18:01:09.807627       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.868191       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	E1008 18:01:09.869875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6257090e-676b-45ea-9261-104b1ba829f3(kube-system/kube-proxy-x5wf6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-x5wf6"
	E1008 18:01:09.871281       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-x5wf6"
	I1008 18:01:09.871556       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	
	
	==> kubelet <==
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:03:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293753    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293782    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295059    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295735    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297939    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297984    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300086    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300349    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302156    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302530    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304820    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304911    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.254307    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307018    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307069    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:10 ha-094095 kubelet[1309]: E1008 18:04:10.309307    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410650308966284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:10 ha-094095 kubelet[1309]: E1008 18:04:10.309339    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410650308966284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.57s)
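For reference, the degraded state captured above (ha-094095-m02 reporting NodeNotReady with "Kubelet stopped posting node status") can be re-checked by hand with the same commands the test drives. A minimal sketch, assuming the ha-094095 profile is still running and its kubeconfig context is available:

	# list node readiness across the HA cluster (m02 is expected to show NotReady here)
	kubectl --context ha-094095 get nodes -o wide

	# inspect conditions and recent events for the stopped control-plane node
	kubectl --context ha-094095 describe node ha-094095-m02

	# per-node host/kubelet/apiserver view that the test asserts on
	out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr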

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr: (4.22088947s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (1.332132378s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m03_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-094095 node start m02 -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:57:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:57:18.946903  548894 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:57:18.947145  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947153  548894 out.go:358] Setting ErrFile to fd 2...
	I1008 17:57:18.947157  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947344  548894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:57:18.947912  548894 out.go:352] Setting JSON to false
	I1008 17:57:18.948876  548894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5991,"bootTime":1728404248,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:57:18.948933  548894 start.go:139] virtualization: kvm guest
	I1008 17:57:18.950969  548894 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:57:18.952033  548894 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:57:18.952082  548894 notify.go:220] Checking for updates...
	I1008 17:57:18.954369  548894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:57:18.955681  548894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:57:18.956842  548894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:18.957830  548894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:57:18.959069  548894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:57:18.960234  548894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:57:18.994761  548894 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 17:57:18.995800  548894 start.go:297] selected driver: kvm2
	I1008 17:57:18.995813  548894 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:57:18.995824  548894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:57:18.996586  548894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:18.996660  548894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:57:19.011273  548894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:57:19.011313  548894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:57:19.011548  548894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:57:19.011585  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:19.011625  548894 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 17:57:19.011636  548894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 17:57:19.011687  548894 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:19.011804  548894 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:19.013449  548894 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 17:57:19.014789  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:19.014817  548894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 17:57:19.014826  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:57:19.014907  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:57:19.014919  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:57:19.015263  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:19.015288  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json: {Name:mk4a4bbfc5e4991434a64e3c2f362f3acde8e751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:19.015419  548894 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:57:19.015446  548894 start.go:364] duration metric: took 15.142µs to acquireMachinesLock for "ha-094095"
	I1008 17:57:19.015463  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
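The cluster config dumped above is the same structure that was written to the profile's config.json (see the "Saving config" line earlier in this log). For a quick look at it outside the test harness, assuming python3 is available on the agent, something like the following works:

	# Pretty-print the saved profile config for ha-094095 (illustrative check only)
	python3 -m json.tool /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json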
	I1008 17:57:19.015507  548894 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 17:57:19.017014  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:57:19.017133  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:57:19.017171  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:57:19.031391  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I1008 17:57:19.031835  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:57:19.032448  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:57:19.032468  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:57:19.032843  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:57:19.033048  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:19.033189  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:19.033336  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:57:19.033367  548894 client.go:168] LocalClient.Create starting
	I1008 17:57:19.033396  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:57:19.033427  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033446  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033499  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:57:19.033517  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033530  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033545  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:57:19.033558  548894 main.go:141] libmachine: (ha-094095) Calling .PreCreateCheck
	I1008 17:57:19.033903  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:19.034253  548894 main.go:141] libmachine: Creating machine...
	I1008 17:57:19.034267  548894 main.go:141] libmachine: (ha-094095) Calling .Create
	I1008 17:57:19.034420  548894 main.go:141] libmachine: (ha-094095) Creating KVM machine...
	I1008 17:57:19.035565  548894 main.go:141] libmachine: (ha-094095) DBG | found existing default KVM network
	I1008 17:57:19.036249  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.036120  548918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1008 17:57:19.036283  548894 main.go:141] libmachine: (ha-094095) DBG | created network xml: 
	I1008 17:57:19.036302  548894 main.go:141] libmachine: (ha-094095) DBG | <network>
	I1008 17:57:19.036314  548894 main.go:141] libmachine: (ha-094095) DBG |   <name>mk-ha-094095</name>
	I1008 17:57:19.036323  548894 main.go:141] libmachine: (ha-094095) DBG |   <dns enable='no'/>
	I1008 17:57:19.036331  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036342  548894 main.go:141] libmachine: (ha-094095) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 17:57:19.036349  548894 main.go:141] libmachine: (ha-094095) DBG |     <dhcp>
	I1008 17:57:19.036361  548894 main.go:141] libmachine: (ha-094095) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 17:57:19.036370  548894 main.go:141] libmachine: (ha-094095) DBG |     </dhcp>
	I1008 17:57:19.036386  548894 main.go:141] libmachine: (ha-094095) DBG |   </ip>
	I1008 17:57:19.036427  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036447  548894 main.go:141] libmachine: (ha-094095) DBG | </network>
	I1008 17:57:19.036455  548894 main.go:141] libmachine: (ha-094095) DBG | 
	I1008 17:57:19.041263  548894 main.go:141] libmachine: (ha-094095) DBG | trying to create private KVM network mk-ha-094095 192.168.39.0/24...
	I1008 17:57:19.105180  548894 main.go:141] libmachine: (ha-094095) DBG | private KVM network mk-ha-094095 192.168.39.0/24 created
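The kvm2 driver creates this private network through the libvirt API rather than the CLI, but an approximately equivalent manual flow for the <network> XML shown above, given purely as an illustration, would be:

	# Define and start an equivalent private network by hand (illustrative only)
	virsh net-define mk-ha-094095.xml   # file containing the <network> XML above
	virsh net-start mk-ha-094095
	virsh net-list --all                # confirm mk-ha-094095 shows as active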
	I1008 17:57:19.105208  548894 main.go:141] libmachine: (ha-094095) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.105220  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.105167  548918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.105237  548894 main.go:141] libmachine: (ha-094095) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:57:19.105263  548894 main.go:141] libmachine: (ha-094095) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:57:19.385345  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.385226  548918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa...
	I1008 17:57:19.617977  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617838  548918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk...
	I1008 17:57:19.618008  548894 main.go:141] libmachine: (ha-094095) DBG | Writing magic tar header
	I1008 17:57:19.618021  548894 main.go:141] libmachine: (ha-094095) DBG | Writing SSH key tar header
	I1008 17:57:19.618031  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617973  548918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.618141  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095
	I1008 17:57:19.618165  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 (perms=drwx------)
	I1008 17:57:19.618171  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:57:19.618178  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:57:19.618187  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:57:19.618193  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:57:19.618199  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.618206  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:57:19.618211  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:57:19.618216  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:57:19.618224  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:57:19.618231  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home
	I1008 17:57:19.618238  548894 main.go:141] libmachine: (ha-094095) DBG | Skipping /home - not owner
	I1008 17:57:19.618249  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:57:19.618261  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:19.619347  548894 main.go:141] libmachine: (ha-094095) define libvirt domain using xml: 
	I1008 17:57:19.619369  548894 main.go:141] libmachine: (ha-094095) <domain type='kvm'>
	I1008 17:57:19.619378  548894 main.go:141] libmachine: (ha-094095)   <name>ha-094095</name>
	I1008 17:57:19.619388  548894 main.go:141] libmachine: (ha-094095)   <memory unit='MiB'>2200</memory>
	I1008 17:57:19.619396  548894 main.go:141] libmachine: (ha-094095)   <vcpu>2</vcpu>
	I1008 17:57:19.619402  548894 main.go:141] libmachine: (ha-094095)   <features>
	I1008 17:57:19.619410  548894 main.go:141] libmachine: (ha-094095)     <acpi/>
	I1008 17:57:19.619420  548894 main.go:141] libmachine: (ha-094095)     <apic/>
	I1008 17:57:19.619427  548894 main.go:141] libmachine: (ha-094095)     <pae/>
	I1008 17:57:19.619444  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619470  548894 main.go:141] libmachine: (ha-094095)   </features>
	I1008 17:57:19.619484  548894 main.go:141] libmachine: (ha-094095)   <cpu mode='host-passthrough'>
	I1008 17:57:19.619491  548894 main.go:141] libmachine: (ha-094095)   
	I1008 17:57:19.619500  548894 main.go:141] libmachine: (ha-094095)   </cpu>
	I1008 17:57:19.619506  548894 main.go:141] libmachine: (ha-094095)   <os>
	I1008 17:57:19.619515  548894 main.go:141] libmachine: (ha-094095)     <type>hvm</type>
	I1008 17:57:19.619527  548894 main.go:141] libmachine: (ha-094095)     <boot dev='cdrom'/>
	I1008 17:57:19.619536  548894 main.go:141] libmachine: (ha-094095)     <boot dev='hd'/>
	I1008 17:57:19.619547  548894 main.go:141] libmachine: (ha-094095)     <bootmenu enable='no'/>
	I1008 17:57:19.619559  548894 main.go:141] libmachine: (ha-094095)   </os>
	I1008 17:57:19.619569  548894 main.go:141] libmachine: (ha-094095)   <devices>
	I1008 17:57:19.619578  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='cdrom'>
	I1008 17:57:19.619590  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/boot2docker.iso'/>
	I1008 17:57:19.619601  548894 main.go:141] libmachine: (ha-094095)       <target dev='hdc' bus='scsi'/>
	I1008 17:57:19.619612  548894 main.go:141] libmachine: (ha-094095)       <readonly/>
	I1008 17:57:19.619621  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619648  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='disk'>
	I1008 17:57:19.619669  548894 main.go:141] libmachine: (ha-094095)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:57:19.619678  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk'/>
	I1008 17:57:19.619688  548894 main.go:141] libmachine: (ha-094095)       <target dev='hda' bus='virtio'/>
	I1008 17:57:19.619694  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619711  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619719  548894 main.go:141] libmachine: (ha-094095)       <source network='mk-ha-094095'/>
	I1008 17:57:19.619724  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619731  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619735  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619743  548894 main.go:141] libmachine: (ha-094095)       <source network='default'/>
	I1008 17:57:19.619747  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619752  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619756  548894 main.go:141] libmachine: (ha-094095)     <serial type='pty'>
	I1008 17:57:19.619763  548894 main.go:141] libmachine: (ha-094095)       <target port='0'/>
	I1008 17:57:19.619769  548894 main.go:141] libmachine: (ha-094095)     </serial>
	I1008 17:57:19.619798  548894 main.go:141] libmachine: (ha-094095)     <console type='pty'>
	I1008 17:57:19.619831  548894 main.go:141] libmachine: (ha-094095)       <target type='serial' port='0'/>
	I1008 17:57:19.619844  548894 main.go:141] libmachine: (ha-094095)     </console>
	I1008 17:57:19.619859  548894 main.go:141] libmachine: (ha-094095)     <rng model='virtio'>
	I1008 17:57:19.619885  548894 main.go:141] libmachine: (ha-094095)       <backend model='random'>/dev/random</backend>
	I1008 17:57:19.619895  548894 main.go:141] libmachine: (ha-094095)     </rng>
	I1008 17:57:19.619903  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619912  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619921  548894 main.go:141] libmachine: (ha-094095)   </devices>
	I1008 17:57:19.619930  548894 main.go:141] libmachine: (ha-094095) </domain>
	I1008 17:57:19.619943  548894 main.go:141] libmachine: (ha-094095) 
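As with the network, the guest is defined via the libvirt API; a rough virsh equivalent for the <domain> XML above, shown only as a sketch, would be:

	# Define and boot an equivalent guest by hand (illustrative only)
	virsh define ha-094095.xml     # file containing the <domain> XML above
	virsh start ha-094095
	virsh dominfo ha-094095        # check state, vCPU count and memory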
	I1008 17:57:19.623957  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:c2:1c:c1 in network default
	I1008 17:57:19.624533  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:19.624567  548894 main.go:141] libmachine: (ha-094095) Ensuring networks are active...
	I1008 17:57:19.625167  548894 main.go:141] libmachine: (ha-094095) Ensuring network default is active
	I1008 17:57:19.625513  548894 main.go:141] libmachine: (ha-094095) Ensuring network mk-ha-094095 is active
	I1008 17:57:19.626008  548894 main.go:141] libmachine: (ha-094095) Getting domain xml...
	I1008 17:57:19.626619  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:20.795900  548894 main.go:141] libmachine: (ha-094095) Waiting to get IP...
	I1008 17:57:20.796661  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:20.797068  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:20.797096  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:20.797046  548918 retry.go:31] will retry after 205.911312ms: waiting for machine to come up
	I1008 17:57:21.004526  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.004999  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.005029  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.004943  548918 retry.go:31] will retry after 273.425618ms: waiting for machine to come up
	I1008 17:57:21.280506  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.280861  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.280894  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.280804  548918 retry.go:31] will retry after 435.479274ms: waiting for machine to come up
	I1008 17:57:21.717289  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.717636  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.717662  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.717595  548918 retry.go:31] will retry after 576.307625ms: waiting for machine to come up
	I1008 17:57:22.295076  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.295499  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.295527  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.295461  548918 retry.go:31] will retry after 636.373654ms: waiting for machine to come up
	I1008 17:57:22.933047  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.933364  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.933391  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.933317  548918 retry.go:31] will retry after 741.414571ms: waiting for machine to come up
	I1008 17:57:23.676038  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:23.676368  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:23.676441  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:23.676362  548918 retry.go:31] will retry after 726.748749ms: waiting for machine to come up
	I1008 17:57:24.404401  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:24.404771  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:24.404801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:24.404726  548918 retry.go:31] will retry after 1.449573768s: waiting for machine to come up
	I1008 17:57:25.856490  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:25.856930  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:25.856961  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:25.856877  548918 retry.go:31] will retry after 1.340937339s: waiting for machine to come up
	I1008 17:57:27.199433  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:27.199826  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:27.199863  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:27.199804  548918 retry.go:31] will retry after 1.798441674s: waiting for machine to come up
	I1008 17:57:28.999424  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:28.999921  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:28.999945  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:28.999873  548918 retry.go:31] will retry after 1.937304185s: waiting for machine to come up
	I1008 17:57:30.939309  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:30.939791  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:30.939819  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:30.939738  548918 retry.go:31] will retry after 3.500432638s: waiting for machine to come up
	I1008 17:57:34.441923  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:34.442356  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:34.442385  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:34.442290  548918 retry.go:31] will retry after 3.09089187s: waiting for machine to come up
	I1008 17:57:37.536439  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:37.536781  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:37.536801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:37.536736  548918 retry.go:31] will retry after 5.395822577s: waiting for machine to come up
	I1008 17:57:42.937057  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937477  548894 main.go:141] libmachine: (ha-094095) Found IP for machine: 192.168.39.99
	I1008 17:57:42.937503  548894 main.go:141] libmachine: (ha-094095) Reserving static IP address...
	I1008 17:57:42.937532  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has current primary IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937886  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find host DHCP lease matching {name: "ha-094095", mac: "52:54:00:bf:fa:3a", ip: "192.168.39.99"} in network mk-ha-094095
	I1008 17:57:43.006083  548894 main.go:141] libmachine: (ha-094095) DBG | Getting to WaitForSSH function...
	I1008 17:57:43.006114  548894 main.go:141] libmachine: (ha-094095) Reserved static IP address: 192.168.39.99
	I1008 17:57:43.006128  548894 main.go:141] libmachine: (ha-094095) Waiting for SSH to be available...
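The retry loop above polls libvirt's DHCP lease table for the new MAC until an address appears. When a machine never gets an IP, the same information can be checked by hand (illustrative only):

	# Show DHCP leases on the cluster's private network and on the default NAT network
	virsh net-dhcp-leases mk-ha-094095
	virsh net-dhcp-leases default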
	I1008 17:57:43.008468  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.008879  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.008907  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.009020  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH client type: external
	I1008 17:57:43.009041  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa (-rw-------)
	I1008 17:57:43.009062  548894 main.go:141] libmachine: (ha-094095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:57:43.009119  548894 main.go:141] libmachine: (ha-094095) DBG | About to run SSH command:
	I1008 17:57:43.009138  548894 main.go:141] libmachine: (ha-094095) DBG | exit 0
	I1008 17:57:43.130112  548894 main.go:141] libmachine: (ha-094095) DBG | SSH cmd err, output: <nil>: 
	I1008 17:57:43.130367  548894 main.go:141] libmachine: (ha-094095) KVM machine creation complete!
	I1008 17:57:43.130653  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:43.131203  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131384  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131553  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:57:43.131567  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:57:43.132696  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:57:43.132710  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:57:43.132718  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:57:43.132724  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.134855  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135157  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.135186  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135341  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.135500  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135635  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135753  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.135900  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.136116  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.136132  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:57:43.237532  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.237562  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:57:43.237573  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.240102  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240361  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.240386  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240541  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.240728  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.240888  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.241033  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.241194  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.241372  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.241387  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:57:43.342754  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:57:43.342848  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:57:43.342862  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:57:43.342875  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343129  548894 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 17:57:43.343169  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343355  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.345781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346150  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.346172  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346401  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.346572  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346747  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346898  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.347071  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.347247  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.347259  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 17:57:43.463654  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 17:57:43.463696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.466255  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466646  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.466682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466840  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.467010  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467143  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467243  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.467378  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.467581  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.467603  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:57:43.579438  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
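At this point the guest hostname and the 127.0.1.1 mapping have been written by the SSH snippet above. A manual spot check, assuming the profile is still up, could be:

	# Confirm the hostname and /etc/hosts entry inside the VM (illustrative only)
	minikube -p ha-094095 ssh -- hostname
	minikube -p ha-094095 ssh -- grep ha-094095 /etc/hosts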
	I1008 17:57:43.579474  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:57:43.579515  548894 buildroot.go:174] setting up certificates
	I1008 17:57:43.579525  548894 provision.go:84] configureAuth start
	I1008 17:57:43.579536  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.579814  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:43.582136  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582503  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.582528  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.584820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585187  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.585207  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585310  548894 provision.go:143] copyHostCerts
	I1008 17:57:43.585352  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585401  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:57:43.585412  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585494  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:57:43.585624  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585659  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:57:43.585677  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585716  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:57:43.585797  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585818  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:57:43.585827  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585862  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:57:43.585945  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 17:57:43.673469  548894 provision.go:177] copyRemoteCerts
	I1008 17:57:43.673538  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:57:43.673570  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.676617  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.676907  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.676942  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.677124  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.677287  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.677489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.677596  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:43.759344  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:57:43.759416  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 17:57:43.781917  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:57:43.781981  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:57:43.804256  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:57:43.804312  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:57:43.826921  548894 provision.go:87] duration metric: took 247.384803ms to configureAuth
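configureAuth generated a server certificate with the SANs listed in the "generating server cert" line above and copied it into /etc/docker on the guest. If a TLS failure were suspected, the local copy could be inspected on the build host (illustrative only, assuming openssl is installed):

	# Check that the generated server cert carries the expected SANs
	openssl x509 -in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'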
	I1008 17:57:43.826944  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:57:43.827107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:57:43.827185  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.830340  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830654  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.830685  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830917  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.831091  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831234  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831362  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.831590  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.831761  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.831775  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:57:44.043562  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
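[editor's note] The step above persists the extra CRI-O flag by writing /etc/sysconfig/crio.minikube on the guest over SSH and restarting the service. A minimal Go sketch of composing that remote command (the helper name is hypothetical, not minikube's own code):

package main

import "fmt"

// buildCrioEnvCmd renders the shell one-liner run over SSH above: persist the
// extra CRI-O options in /etc/sysconfig/crio.minikube, then restart crio.
func buildCrioEnvCmd(opts string) string {
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='%s'
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	fmt.Println(buildCrioEnvCmd("--insecure-registry 10.96.0.0/12 "))
}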
	
	I1008 17:57:44.043593  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:57:44.043602  548894 main.go:141] libmachine: (ha-094095) Calling .GetURL
	I1008 17:57:44.044870  548894 main.go:141] libmachine: (ha-094095) DBG | Using libvirt version 6000000
	I1008 17:57:44.047119  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047449  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.047478  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047637  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:57:44.047652  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:57:44.047661  548894 client.go:171] duration metric: took 25.014282218s to LocalClient.Create
	I1008 17:57:44.047690  548894 start.go:167] duration metric: took 25.014354001s to libmachine.API.Create "ha-094095"
	I1008 17:57:44.047702  548894 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 17:57:44.047716  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:57:44.047739  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.048014  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:57:44.048045  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.050022  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050306  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.050347  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050505  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.050666  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.050837  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.050949  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.132504  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:57:44.136621  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:57:44.136645  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:57:44.136713  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:57:44.136806  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:57:44.136818  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:57:44.136924  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:57:44.146103  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:44.168356  548894 start.go:296] duration metric: took 120.640584ms for postStartSetup
	I1008 17:57:44.168411  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:44.169087  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.172425  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.172799  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.172823  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.173056  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:44.173256  548894 start.go:128] duration metric: took 25.157738621s to createHost
	I1008 17:57:44.173281  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.175394  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175685  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.175711  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175872  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.176022  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176162  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176257  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.176381  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:44.176571  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:44.176587  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:57:44.278668  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410264.248509692
	
	I1008 17:57:44.278691  548894 fix.go:216] guest clock: 1728410264.248509692
	I1008 17:57:44.278710  548894 fix.go:229] Guest: 2024-10-08 17:57:44.248509692 +0000 UTC Remote: 2024-10-08 17:57:44.173269639 +0000 UTC m=+25.264229848 (delta=75.240053ms)
	I1008 17:57:44.278730  548894 fix.go:200] guest clock delta is within tolerance: 75.240053ms
	I1008 17:57:44.278735  548894 start.go:83] releasing machines lock for "ha-094095", held for 25.26328044s
	I1008 17:57:44.278761  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.279011  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.281403  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281704  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.281728  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281844  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282331  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282492  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282608  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:57:44.282649  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.282695  548894 ssh_runner.go:195] Run: cat /version.json
	I1008 17:57:44.282718  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.285197  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285467  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285561  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285596  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285720  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.285878  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.285947  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285972  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.286009  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286152  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.286166  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.286407  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.286555  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286685  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.362923  548894 ssh_runner.go:195] Run: systemctl --version
	I1008 17:57:44.382917  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:57:44.543848  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:57:44.549734  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:57:44.549799  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:57:44.566434  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:57:44.566456  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:57:44.566531  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:57:44.582382  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:57:44.595796  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:57:44.595845  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:57:44.608932  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:57:44.621723  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:57:44.737514  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:57:44.894846  548894 docker.go:233] disabling docker service ...
	I1008 17:57:44.894913  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:57:44.908802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:57:44.920944  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:57:45.040515  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:57:45.156709  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:57:45.170339  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:57:45.188088  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:57:45.188162  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.197887  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:57:45.197965  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.207765  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.217192  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.226820  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:57:45.236401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.246021  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.261908  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.271409  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:57:45.280221  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:57:45.280279  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:57:45.293099  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
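[editor's note] The sed/sysctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl), loads br_netfilter and enables IPv4 forwarding before the daemon-reload that follows. A sketch gathering the same guest-side commands in one place (illustrative helper, not minikube code):

package main

import "fmt"

// crioSetupCmds collects the commands shown in the log: point CRI-O at the
// minikube pause image, switch it to the cgroupfs driver, add the conmon
// cgroup, then make sure bridge netfilter and IPv4 forwarding are available.
func crioSetupCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
	}
}

func main() {
	for _, c := range crioSetupCmds("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(c)
	}
}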
	I1008 17:57:45.301781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:45.406440  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:57:45.492188  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:57:45.492292  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:57:45.496696  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:57:45.496749  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:57:45.500380  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:57:45.538828  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:57:45.538916  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.566412  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.594012  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:57:45.595183  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:45.597820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598135  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:45.598169  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598406  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:57:45.602368  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
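[editor's note] The /etc/hosts update above uses a filter-and-append idiom: drop any stale line for the name, echo the new "IP<TAB>name" mapping into a temp file, and copy the result back with sudo. A small sketch reproducing the inner command (hypothetical helper):

package main

import "fmt"

// hostsEntryCmd rebuilds the bash snippet from the log: remove any existing
// mapping for name, append "ip<TAB>name", and copy the temp file over /etc/hosts.
func hostsEntryCmd(ip, name string) string {
	entry := ip + "\t" + name
	return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, name, entry)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.39.1", "host.minikube.internal"))
}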
	I1008 17:57:45.614968  548894 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 17:57:45.615076  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:45.615144  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:45.645417  548894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 17:57:45.645488  548894 ssh_runner.go:195] Run: which lz4
	I1008 17:57:45.649242  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1008 17:57:45.649331  548894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 17:57:45.653358  548894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 17:57:45.653398  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 17:57:46.900415  548894 crio.go:462] duration metric: took 1.251111162s to copy over tarball
	I1008 17:57:46.900502  548894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 17:57:48.824951  548894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.92441022s)
	I1008 17:57:48.824989  548894 crio.go:469] duration metric: took 1.924546326s to extract the tarball
	I1008 17:57:48.825000  548894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 17:57:48.862916  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:48.914586  548894 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 17:57:48.914611  548894 cache_images.go:84] Images are preloaded, skipping loading
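[editor's note] Because no Kubernetes images were present yet, the preloaded image tarball was copied to the guest and unpacked under /var, after which crictl reports all images as preloaded. A one-line sketch of the extraction command used above (illustrative only):

package main

import "fmt"

// preloadExtractCmd mirrors the extraction step in the log: the lz4-compressed
// preload tarball is unpacked under /var so CRI-O finds the v1.31.1 images
// without pulling them from a registry.
func preloadExtractCmd(tarball string) string {
	return fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", tarball)
}

func main() {
	fmt.Println(preloadExtractCmd("/preloaded.tar.lz4"))
}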
	I1008 17:57:48.914620  548894 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 17:57:48.914713  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:57:48.914782  548894 ssh_runner.go:195] Run: crio config
	I1008 17:57:48.965231  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:48.965254  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:57:48.965272  548894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 17:57:48.965293  548894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 17:57:48.965430  548894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
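[editor's note] The generated file above stacks four YAML documents in one kubeadm config: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration (the last two also disable disk-pressure eviction and conntrack tuning for the test VM). A rough sketch that lists the document kinds with plain string scanning; this is not how kubeadm or minikube parse the file:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// listKinds scans a stacked kubeadm config and reports the kind of each YAML
// document. Plain string scanning, good enough for a quick sanity check.
func listKinds(config string) []string {
	var kinds []string
	sc := bufio.NewScanner(strings.NewReader(config))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "kind:") {
			kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	return kinds
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	fmt.Println(listKinds(cfg))
}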
	
	I1008 17:57:48.965457  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:57:48.965957  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:57:48.984862  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:57:48.984960  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
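[editor's note] The manifest above runs kube-vip as a static pod; once it is up, the leader-elected control-plane node answers on the virtual IP 192.168.39.254:8443. A tiny probe sketch (hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

// vipReachable dials the kube-vip address once; a successful TCP connect means
// the leader-elected control-plane node is answering on the VIP.
func vipReachable(vip string, port int, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", vip, port), timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(vipReachable("192.168.39.254", 8443, 2*time.Second))
}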
	I1008 17:57:48.985020  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:57:48.994069  548894 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 17:57:48.994134  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 17:57:49.003013  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 17:57:49.018952  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:57:49.034270  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 17:57:49.049856  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1008 17:57:49.065212  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:57:49.068890  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:57:49.080238  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:49.207273  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:57:49.224685  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 17:57:49.224709  548894 certs.go:194] generating shared ca certs ...
	I1008 17:57:49.224731  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.224901  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:57:49.224958  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:57:49.224972  548894 certs.go:256] generating profile certs ...
	I1008 17:57:49.225044  548894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:57:49.225073  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt with IP's: []
	I1008 17:57:49.321305  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt ...
	I1008 17:57:49.321342  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt: {Name:mkc9007ec871f6b1b480e3b611a05707a64a5848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321530  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key ...
	I1008 17:57:49.321546  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key: {Name:mke9b241dc151acd2e67df3e03efa92395ed4dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321647  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc
	I1008 17:57:49.321666  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.254]
	I1008 17:57:49.615476  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc ...
	I1008 17:57:49.615508  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc: {Name:mk28ddc8f9cdc62c03babb0aa78423717078ec15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615696  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc ...
	I1008 17:57:49.615715  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc: {Name:mk7165300ee0dd42df7c546caae76a339625e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615817  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:57:49.615941  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:57:49.616029  548894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:57:49.616053  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt with IP's: []
	I1008 17:57:49.700382  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt ...
	I1008 17:57:49.700415  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt: {Name:mk23273db76b4a6b0f12257e27a1a06fa6830ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.700587  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key ...
	I1008 17:57:49.700602  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key: {Name:mk0eecaa249eaee41f1ee6112c7eb1585a4e7c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
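[editor's note] At this point the profile certificates exist: a client cert, an apiserver serving cert whose IP SANs cover the service IP 10.96.0.1, localhost, the node IP 192.168.39.99 and the kube-vip VIP 192.168.39.254, and an aggregator proxy-client cert. A small sketch for inspecting those SANs (the file path is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// printSANs decodes a PEM certificate and prints its DNS and IP SANs, e.g. to
// confirm the apiserver cert covers the service, node and VIP addresses.
func printSANs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	for _, d := range cert.DNSNames {
		fmt.Println("dns:", d)
	}
	for _, ip := range cert.IPAddresses {
		fmt.Println("ip:", ip)
	}
	return nil
}

func main() {
	if err := printSANs("apiserver.crt"); err != nil {
		fmt.Println(err)
	}
}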
	I1008 17:57:49.700724  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:57:49.700753  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:57:49.700768  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:57:49.700784  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:57:49.700811  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:57:49.700836  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:57:49.700855  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:57:49.700874  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:57:49.700934  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:57:49.700987  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:57:49.701002  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:57:49.701037  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:57:49.701072  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:57:49.701103  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:57:49.701155  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:49.701193  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:49.701232  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:57:49.701259  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:57:49.701875  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:57:49.727666  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:57:49.750886  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:57:49.773442  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:57:49.797562  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 17:57:49.820463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:57:49.843011  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:57:49.866615  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:57:49.889741  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:57:49.912810  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:57:49.936333  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:57:49.960454  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 17:57:49.979469  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:57:49.985669  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:57:49.997465  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003200  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003257  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.009543  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:57:50.024695  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:57:50.038764  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044608  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044730  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.050835  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:57:50.061168  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:57:50.071347  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075705  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075749  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.081172  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
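[editor's note] The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how the system trust store locates each CA PEM under /etc/ssl/certs. A sketch that recomputes such a link name by shelling out to openssl (assumes openssl is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLinkName asks openssl for a certificate's subject hash; "<hash>.0" is the
// symlink name the system trust store expects under /etc/ssl/certs.
func hashLinkName(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println(name)
}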
	I1008 17:57:50.091550  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:57:50.095476  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:57:50.095534  548894 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:50.095625  548894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 17:57:50.095693  548894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 17:57:50.141057  548894 cri.go:89] found id: ""
	I1008 17:57:50.141128  548894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 17:57:50.155661  548894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 17:57:50.164965  548894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 17:57:50.174132  548894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 17:57:50.174150  548894 kubeadm.go:157] found existing configuration files:
	
	I1008 17:57:50.174193  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 17:57:50.182760  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 17:57:50.182801  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 17:57:50.191921  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 17:57:50.200321  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 17:57:50.200379  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 17:57:50.209419  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.217728  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 17:57:50.217774  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.226543  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 17:57:50.234817  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 17:57:50.234864  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 17:57:50.243553  548894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 17:57:50.351407  548894 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 17:57:50.351505  548894 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 17:57:50.448058  548894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 17:57:50.448219  548894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 17:57:50.448390  548894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 17:57:50.458228  548894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 17:57:50.561945  548894 out.go:235]   - Generating certificates and keys ...
	I1008 17:57:50.562071  548894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 17:57:50.562160  548894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 17:57:50.581396  548894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 17:57:50.643567  548894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 17:57:50.777590  548894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 17:57:50.908209  548894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 17:57:51.030015  548894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 17:57:51.030180  548894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.147196  548894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 17:57:51.147407  548894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.301954  548894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 17:57:51.401522  548894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 17:57:51.537212  548894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 17:57:51.537477  548894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 17:57:51.996984  548894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 17:57:52.232782  548894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 17:57:52.360403  548894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 17:57:52.550793  548894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 17:57:52.645896  548894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 17:57:52.646431  548894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 17:57:52.649705  548894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 17:57:52.693095  548894 out.go:235]   - Booting up control plane ...
	I1008 17:57:52.693231  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 17:57:52.693301  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 17:57:52.693399  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 17:57:52.693595  548894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 17:57:52.693726  548894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 17:57:52.693765  548894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 17:57:52.808206  548894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 17:57:52.808366  548894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 17:57:53.309429  548894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.545044ms
	I1008 17:57:53.309511  548894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 17:57:59.231916  548894 kubeadm.go:310] [api-check] The API server is healthy after 5.925563733s
	I1008 17:57:59.243298  548894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 17:57:59.259662  548894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 17:57:59.788254  548894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 17:57:59.788485  548894 kubeadm.go:310] [mark-control-plane] Marking the node ha-094095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 17:57:59.797286  548894 kubeadm.go:310] [bootstrap-token] Using token: 3mfy3k.85hms8dtl8svlvkm
	I1008 17:57:59.798387  548894 out.go:235]   - Configuring RBAC rules ...
	I1008 17:57:59.798518  548894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 17:57:59.805485  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 17:57:59.816460  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 17:57:59.820883  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 17:57:59.823643  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 17:57:59.826562  548894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 17:57:59.838159  548894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 17:58:00.096325  548894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 17:58:00.637130  548894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 17:58:00.638100  548894 kubeadm.go:310] 
	I1008 17:58:00.638187  548894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 17:58:00.638198  548894 kubeadm.go:310] 
	I1008 17:58:00.638289  548894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 17:58:00.638337  548894 kubeadm.go:310] 
	I1008 17:58:00.638388  548894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 17:58:00.638476  548894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 17:58:00.638558  548894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 17:58:00.638573  548894 kubeadm.go:310] 
	I1008 17:58:00.638644  548894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 17:58:00.638654  548894 kubeadm.go:310] 
	I1008 17:58:00.638715  548894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 17:58:00.638725  548894 kubeadm.go:310] 
	I1008 17:58:00.638784  548894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 17:58:00.638864  548894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 17:58:00.638920  548894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 17:58:00.638927  548894 kubeadm.go:310] 
	I1008 17:58:00.638996  548894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 17:58:00.639061  548894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 17:58:00.639067  548894 kubeadm.go:310] 
	I1008 17:58:00.639138  548894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639257  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 17:58:00.639298  548894 kubeadm.go:310] 	--control-plane 
	I1008 17:58:00.639308  548894 kubeadm.go:310] 
	I1008 17:58:00.639444  548894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 17:58:00.639453  548894 kubeadm.go:310] 
	I1008 17:58:00.639547  548894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639692  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 17:58:00.640765  548894 kubeadm.go:310] W1008 17:57:50.322627     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.640999  548894 kubeadm.go:310] W1008 17:57:50.323512     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.641121  548894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
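
The --discovery-token-ca-cert-hash printed in the join commands above is, by kubeadm convention, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch of recomputing it from the CA file (the /etc/kubernetes/pki/ca.crt path is the standard kubeadm location, assumed here rather than taken from this log):

    // ca_hash.go - sketch of deriving the sha256:<hex> discovery-token-ca-cert-hash.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed default path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
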
	I1008 17:58:00.641159  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:58:00.641169  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:58:00.643434  548894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1008 17:58:00.644444  548894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 17:58:00.650209  548894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1008 17:58:00.650224  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 17:58:00.677687  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 17:58:01.011782  548894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 17:58:01.011872  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.011918  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095 minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=true
	I1008 17:58:01.050127  548894 ops.go:34] apiserver oom_adj: -16
	I1008 17:58:01.121355  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.622435  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.121789  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.621637  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.121512  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.621993  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.121641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.621728  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.753917  548894 kubeadm.go:1113] duration metric: took 3.742110374s to wait for elevateKubeSystemPrivileges
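
The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: the call is retried roughly every 500ms until the default service account exists, and the total duration is then recorded. A rough, hypothetical sketch of such a loop (not minikube's implementation; the kubeconfig path is the one from the log):

    // wait_sa.go - hypothetical retry loop for "kubectl get sa default".
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil // the default service account exists; RBAC setup can proceed
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("default service account not found within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
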
	I1008 17:58:04.753962  548894 kubeadm.go:394] duration metric: took 14.658436547s to StartCluster
	I1008 17:58:04.753985  548894 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.754071  548894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.755006  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.755245  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 17:58:04.755258  548894 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:04.755285  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:58:04.755305  548894 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 17:58:04.755395  548894 addons.go:69] Setting storage-provisioner=true in profile "ha-094095"
	I1008 17:58:04.755421  548894 addons.go:234] Setting addon storage-provisioner=true in "ha-094095"
	I1008 17:58:04.755423  548894 addons.go:69] Setting default-storageclass=true in profile "ha-094095"
	I1008 17:58:04.755450  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.755463  548894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-094095"
	I1008 17:58:04.755954  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:04.756015  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756060  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.756153  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756178  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.771314  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I1008 17:58:04.771411  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1008 17:58:04.771715  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.771845  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.772259  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772280  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772399  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772421  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772677  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772761  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772921  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.773166  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.773207  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.775127  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.775464  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
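
The rest.Config dump above is the client configuration minikube builds from the kubeconfig it just wrote. With plain client-go, an equivalent client can be constructed from that kubeconfig as sketched below (illustrative only; the kubeconfig path comes from the log, and the StorageClasses list mirrors the GET request that shows up further down):

    // client_config.go - minimal client-go sketch; not minikube's kapi helper.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/home/jenkins/minikube-integration/19774-529764/kubeconfig" // path from the log
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Roughly the same call as the GET .../storageclasses request later in this log.
        scs, err := client.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("storage classes:", len(scs.Items))
    }
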
	I1008 17:58:04.776098  548894 cert_rotation.go:140] Starting client certificate rotation controller
	I1008 17:58:04.776464  548894 addons.go:234] Setting addon default-storageclass=true in "ha-094095"
	I1008 17:58:04.776513  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.776901  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.776950  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.788872  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I1008 17:58:04.789408  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.789954  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.789982  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.790391  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.790585  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.791166  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1008 17:58:04.791602  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.792075  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.792102  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.792300  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.792437  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.792883  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.792921  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.794070  548894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 17:58:04.795292  548894 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:04.795314  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 17:58:04.795333  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.798275  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798778  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.798817  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798959  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.799152  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.799319  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.799447  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.807217  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1008 17:58:04.807681  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.808084  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.808108  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.808466  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.808664  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.810084  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.810282  548894 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:04.810305  548894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 17:58:04.810351  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.813002  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813401  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.813426  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813628  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.813798  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.813951  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.814091  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.894935  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 17:58:04.989822  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:05.005242  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:05.480020  548894 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
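
The `scp memory --> ...` lines earlier stage the addon manifests on the node, and the two `kubectl apply` runs above then feed them to the cluster's own kubectl. A rough sketch of pushing an in-memory manifest to a remote path over SSH (piping into `sudo tee` is an assumption made for illustration; this is not minikube's ssh_runner):

    // push_manifest.go - rough sketch of copying in-memory bytes to a remote file over SSH.
    package main

    import (
        "bytes"
        "os/exec"
    )

    func pushFile(host, remotePath string, contents []byte) error {
        // Stream the bytes over SSH and write them with sudo on the far side.
        cmd := exec.Command("ssh", "docker@"+host, "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(contents)
        return cmd.Run()
    }

    func main() {
        manifest := []byte("# storage-provisioner.yaml contents would go here\n")
        if err := pushFile("192.168.39.99", "/etc/kubernetes/addons/storage-provisioner.yaml", manifest); err != nil {
            panic(err)
        }
    }
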
	I1008 17:58:05.749086  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749116  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749148  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749170  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749410  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749425  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749434  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749440  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749521  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749536  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749550  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749557  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749608  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749908  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749943  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750036  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749970  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.750103  548894 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 17:58:05.749988  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750114  548894 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 17:58:05.750160  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.750219  548894 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1008 17:58:05.750231  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.750241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.750250  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.762332  548894 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1008 17:58:05.763152  548894 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1008 17:58:05.763172  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.763185  548894 round_trippers.go:473]     Content-Type: application/json
	I1008 17:58:05.763193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.763197  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.765314  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:58:05.765554  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.765571  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.765856  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.765872  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.765886  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.768201  548894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1008 17:58:05.769166  548894 addons.go:510] duration metric: took 1.013864152s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 17:58:05.769206  548894 start.go:246] waiting for cluster config update ...
	I1008 17:58:05.769221  548894 start.go:255] writing updated cluster config ...
	I1008 17:58:05.770624  548894 out.go:201] 
	I1008 17:58:05.771889  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:05.771979  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.773435  548894 out.go:177] * Starting "ha-094095-m02" control-plane node in "ha-094095" cluster
	I1008 17:58:05.774389  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:58:05.774416  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:58:05.774517  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:58:05.774543  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:58:05.774635  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.774827  548894 start.go:360] acquireMachinesLock for ha-094095-m02: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:58:05.774885  548894 start.go:364] duration metric: took 34.657µs to acquireMachinesLock for "ha-094095-m02"
	I1008 17:58:05.774908  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:05.775005  548894 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1008 17:58:05.776351  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:58:05.776440  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:05.776482  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:05.791492  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I1008 17:58:05.791992  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:05.792464  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:05.792487  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:05.792786  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:05.792949  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:05.793054  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:05.793160  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:58:05.793192  548894 client.go:168] LocalClient.Create starting
	I1008 17:58:05.793230  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:58:05.793268  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793289  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793356  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:58:05.793382  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793399  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793425  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:58:05.793436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .PreCreateCheck
	I1008 17:58:05.793636  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:05.793961  548894 main.go:141] libmachine: Creating machine...
	I1008 17:58:05.793974  548894 main.go:141] libmachine: (ha-094095-m02) Calling .Create
	I1008 17:58:05.794087  548894 main.go:141] libmachine: (ha-094095-m02) Creating KVM machine...
	I1008 17:58:05.795174  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing default KVM network
	I1008 17:58:05.795373  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing private KVM network mk-ha-094095
	I1008 17:58:05.795488  548894 main.go:141] libmachine: (ha-094095-m02) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:05.795518  548894 main.go:141] libmachine: (ha-094095-m02) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:58:05.795590  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:05.795498  549282 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:05.795693  548894 main.go:141] libmachine: (ha-094095-m02) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:58:06.080254  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.080126  549282 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa...
	I1008 17:58:06.408665  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408546  549282 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk...
	I1008 17:58:06.408701  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing magic tar header
	I1008 17:58:06.408716  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing SSH key tar header
	I1008 17:58:06.408729  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408669  549282 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:06.408798  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02
	I1008 17:58:06.408863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:58:06.408916  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 (perms=drwx------)
	I1008 17:58:06.408935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:06.408946  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:58:06.408954  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:58:06.408966  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:58:06.408972  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home
	I1008 17:58:06.408988  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Skipping /home - not owner
	I1008 17:58:06.409003  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:58:06.409013  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:58:06.409022  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:58:06.409038  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:58:06.409050  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:58:06.409060  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:06.410262  548894 main.go:141] libmachine: (ha-094095-m02) define libvirt domain using xml: 
	I1008 17:58:06.410280  548894 main.go:141] libmachine: (ha-094095-m02) <domain type='kvm'>
	I1008 17:58:06.410300  548894 main.go:141] libmachine: (ha-094095-m02)   <name>ha-094095-m02</name>
	I1008 17:58:06.410310  548894 main.go:141] libmachine: (ha-094095-m02)   <memory unit='MiB'>2200</memory>
	I1008 17:58:06.410330  548894 main.go:141] libmachine: (ha-094095-m02)   <vcpu>2</vcpu>
	I1008 17:58:06.410344  548894 main.go:141] libmachine: (ha-094095-m02)   <features>
	I1008 17:58:06.410353  548894 main.go:141] libmachine: (ha-094095-m02)     <acpi/>
	I1008 17:58:06.410361  548894 main.go:141] libmachine: (ha-094095-m02)     <apic/>
	I1008 17:58:06.410367  548894 main.go:141] libmachine: (ha-094095-m02)     <pae/>
	I1008 17:58:06.410371  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410376  548894 main.go:141] libmachine: (ha-094095-m02)   </features>
	I1008 17:58:06.410383  548894 main.go:141] libmachine: (ha-094095-m02)   <cpu mode='host-passthrough'>
	I1008 17:58:06.410388  548894 main.go:141] libmachine: (ha-094095-m02)   
	I1008 17:58:06.410392  548894 main.go:141] libmachine: (ha-094095-m02)   </cpu>
	I1008 17:58:06.410397  548894 main.go:141] libmachine: (ha-094095-m02)   <os>
	I1008 17:58:06.410403  548894 main.go:141] libmachine: (ha-094095-m02)     <type>hvm</type>
	I1008 17:58:06.410408  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='cdrom'/>
	I1008 17:58:06.410418  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='hd'/>
	I1008 17:58:06.410430  548894 main.go:141] libmachine: (ha-094095-m02)     <bootmenu enable='no'/>
	I1008 17:58:06.410440  548894 main.go:141] libmachine: (ha-094095-m02)   </os>
	I1008 17:58:06.410448  548894 main.go:141] libmachine: (ha-094095-m02)   <devices>
	I1008 17:58:06.410456  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='cdrom'>
	I1008 17:58:06.410468  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/boot2docker.iso'/>
	I1008 17:58:06.410474  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hdc' bus='scsi'/>
	I1008 17:58:06.410479  548894 main.go:141] libmachine: (ha-094095-m02)       <readonly/>
	I1008 17:58:06.410485  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410515  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='disk'>
	I1008 17:58:06.410542  548894 main.go:141] libmachine: (ha-094095-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:58:06.410557  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk'/>
	I1008 17:58:06.410568  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hda' bus='virtio'/>
	I1008 17:58:06.410582  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410592  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410604  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='mk-ha-094095'/>
	I1008 17:58:06.410613  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410622  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410630  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410642  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='default'/>
	I1008 17:58:06.410661  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410673  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410683  548894 main.go:141] libmachine: (ha-094095-m02)     <serial type='pty'>
	I1008 17:58:06.410692  548894 main.go:141] libmachine: (ha-094095-m02)       <target port='0'/>
	I1008 17:58:06.410700  548894 main.go:141] libmachine: (ha-094095-m02)     </serial>
	I1008 17:58:06.410712  548894 main.go:141] libmachine: (ha-094095-m02)     <console type='pty'>
	I1008 17:58:06.410727  548894 main.go:141] libmachine: (ha-094095-m02)       <target type='serial' port='0'/>
	I1008 17:58:06.410741  548894 main.go:141] libmachine: (ha-094095-m02)     </console>
	I1008 17:58:06.410750  548894 main.go:141] libmachine: (ha-094095-m02)     <rng model='virtio'>
	I1008 17:58:06.410761  548894 main.go:141] libmachine: (ha-094095-m02)       <backend model='random'>/dev/random</backend>
	I1008 17:58:06.410771  548894 main.go:141] libmachine: (ha-094095-m02)     </rng>
	I1008 17:58:06.410780  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410787  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410796  548894 main.go:141] libmachine: (ha-094095-m02)   </devices>
	I1008 17:58:06.410804  548894 main.go:141] libmachine: (ha-094095-m02) </domain>
	I1008 17:58:06.410828  548894 main.go:141] libmachine: (ha-094095-m02) 
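
Once the domain XML above is assembled, the kvm2 driver hands it to libvirt to define and boot the VM (the "define libvirt domain using xml" and "Creating domain..." lines). A minimal sketch of that step with the libvirt Go bindings (assumed use of libvirt.org/go/libvirt and a hypothetical file holding the XML; not the driver's actual code):

    // define_domain.go - sketch of defining and starting a libvirt domain from XML.
    package main

    import (
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-094095-m02.xml") // hypothetical file holding the XML above
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // URI matches KVMQemuURI in the config dump
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
            panic(err)
        }
    }
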
	I1008 17:58:06.418030  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:0f:fc:b1 in network default
	I1008 17:58:06.418595  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring networks are active...
	I1008 17:58:06.418616  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:06.419273  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network default is active
	I1008 17:58:06.419679  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network mk-ha-094095 is active
	I1008 17:58:06.420099  548894 main.go:141] libmachine: (ha-094095-m02) Getting domain xml...
	I1008 17:58:06.420774  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:07.625613  548894 main.go:141] libmachine: (ha-094095-m02) Waiting to get IP...
	I1008 17:58:07.626394  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.626834  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.626863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.626812  549282 retry.go:31] will retry after 298.191028ms: waiting for machine to come up
	I1008 17:58:07.926517  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.926935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.926967  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.926892  549282 retry.go:31] will retry after 251.007436ms: waiting for machine to come up
	I1008 17:58:08.179311  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.179723  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.179753  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.179684  549282 retry.go:31] will retry after 369.990509ms: waiting for machine to come up
	I1008 17:58:08.551209  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.551664  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.551688  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.551618  549282 retry.go:31] will retry after 529.446819ms: waiting for machine to come up
	I1008 17:58:09.082289  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.082764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.082787  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.082730  549282 retry.go:31] will retry after 698.772609ms: waiting for machine to come up
	I1008 17:58:09.782428  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.783035  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.783077  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.782975  549282 retry.go:31] will retry after 749.123701ms: waiting for machine to come up
	I1008 17:58:10.533886  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:10.534374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:10.534406  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:10.534314  549282 retry.go:31] will retry after 748.167347ms: waiting for machine to come up
	I1008 17:58:11.284374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:11.284764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:11.284793  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:11.284726  549282 retry.go:31] will retry after 1.314312212s: waiting for machine to come up
	I1008 17:58:12.600256  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:12.600675  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:12.600706  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:12.600619  549282 retry.go:31] will retry after 1.264771643s: waiting for machine to come up
	I1008 17:58:13.867255  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:13.867784  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:13.867816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:13.867728  549282 retry.go:31] will retry after 2.081210662s: waiting for machine to come up
	I1008 17:58:15.950893  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:15.951309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:15.951341  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:15.951258  549282 retry.go:31] will retry after 2.823132453s: waiting for machine to come up
	I1008 17:58:18.778198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:18.778573  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:18.778605  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:18.778535  549282 retry.go:31] will retry after 2.715237967s: waiting for machine to come up
	I1008 17:58:21.495309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:21.495754  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:21.495780  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:21.495712  549282 retry.go:31] will retry after 2.962404474s: waiting for machine to come up
	I1008 17:58:24.461815  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:24.462170  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:24.462198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:24.462131  549282 retry.go:31] will retry after 4.711440731s: waiting for machine to come up
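
The `will retry after ...` lines above are a retry helper waiting for the new VM's DHCP lease with a growing, jittered delay. A hedged sketch of that kind of backoff loop (the getIP placeholder and the delay schedule are assumptions made for illustration, not minikube's retry helper):

    // wait_ip.go - illustrative backoff loop while waiting for a VM to get an IP.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // getIP is a hypothetical stand-in for querying the libvirt network's DHCP leases.
    func getIP() (string, error) { return "", errNoIP }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, mirroring the increasing waits in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", fmt.Errorf("machine did not get an IP within %s", timeout)
    }

    func main() {
        if _, err := waitForIP(10 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
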
	I1008 17:58:29.176935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177439  548894 main.go:141] libmachine: (ha-094095-m02) Found IP for machine: 192.168.39.65
	I1008 17:58:29.177459  548894 main.go:141] libmachine: (ha-094095-m02) Reserving static IP address...
	I1008 17:58:29.177467  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177881  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find host DHCP lease matching {name: "ha-094095-m02", mac: "52:54:00:28:c9:b2", ip: "192.168.39.65"} in network mk-ha-094095
	I1008 17:58:29.250979  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Getting to WaitForSSH function...
	I1008 17:58:29.251007  548894 main.go:141] libmachine: (ha-094095-m02) Reserved static IP address: 192.168.39.65
	I1008 17:58:29.251020  548894 main.go:141] libmachine: (ha-094095-m02) Waiting for SSH to be available...
	I1008 17:58:29.253304  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253715  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.253745  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253826  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH client type: external
	I1008 17:58:29.253858  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa (-rw-------)
	I1008 17:58:29.253895  548894 main.go:141] libmachine: (ha-094095-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:58:29.253928  548894 main.go:141] libmachine: (ha-094095-m02) DBG | About to run SSH command:
	I1008 17:58:29.253953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | exit 0
	I1008 17:58:29.377997  548894 main.go:141] libmachine: (ha-094095-m02) DBG | SSH cmd err, output: <nil>: 
	I1008 17:58:29.378287  548894 main.go:141] libmachine: (ha-094095-m02) KVM machine creation complete!
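
The WaitForSSH step above just runs `exit 0` over SSH with host-key checking disabled and treats a zero exit status as proof the guest is reachable. A small illustrative sketch of the same probe by shelling out to the system ssh client (the flags approximate rather than reproduce the exact command line in the log; the key path and IP are taken from it):

    // wait_ssh.go - illustrative "exit 0" SSH reachability probe.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReachable(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil // exit status 0 means sshd is up and the key is accepted
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa"
        for !sshReachable("192.168.39.65", key) {
            time.Sleep(time.Second)
        }
        fmt.Println("SSH is available")
    }
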
	I1008 17:58:29.378621  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:29.379167  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379376  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379500  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:58:29.379514  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 17:58:29.380658  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:58:29.380670  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:58:29.380676  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:58:29.380683  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.382734  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383074  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.383097  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383251  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.383416  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383613  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383753  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.383914  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.384122  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.384133  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:58:29.485427  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:58:29.485449  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:58:29.485460  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.488012  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488364  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.488395  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488586  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.488786  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.488953  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.489087  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.489247  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.489514  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.489530  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:58:29.590445  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:58:29.590532  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:58:29.590542  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:58:29.590551  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.590782  548894 buildroot.go:166] provisioning hostname "ha-094095-m02"
	I1008 17:58:29.590806  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.591021  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.593666  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594067  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.594096  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594246  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.594404  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594554  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594724  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.594891  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.595109  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.595125  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m02 && echo "ha-094095-m02" | sudo tee /etc/hostname
	I1008 17:58:29.714147  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m02
	
	I1008 17:58:29.714180  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.716973  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717353  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.717384  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717565  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.717752  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.717913  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.718050  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.718222  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.718416  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.718433  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:58:29.831586  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
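The block above first sets the hostname over SSH and then maps it to 127.0.1.1 so the name resolves locally before cluster DNS is available. A quick manual check on the guest (a sketch, not part of the run):

    grep -n 'ha-094095-m02' /etc/hosts    # expect a 127.0.1.1 entry
    getent hosts ha-094095-m02            # resolves via /etc/hosts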
	I1008 17:58:29.831619  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:58:29.831636  548894 buildroot.go:174] setting up certificates
	I1008 17:58:29.831645  548894 provision.go:84] configureAuth start
	I1008 17:58:29.831659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.831944  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:29.834827  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835217  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.835237  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.837816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.838223  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838374  548894 provision.go:143] copyHostCerts
	I1008 17:58:29.838406  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838440  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:58:29.838448  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838513  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:58:29.838598  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838615  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:58:29.838620  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838643  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:58:29.838682  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838698  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:58:29.838704  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838730  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:58:29.838774  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m02 san=[127.0.0.1 192.168.39.65 ha-094095-m02 localhost minikube]
	I1008 17:58:29.938554  548894 provision.go:177] copyRemoteCerts
	I1008 17:58:29.938614  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:58:29.938646  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.941344  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941644  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.941673  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941805  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.942003  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.942163  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.942301  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.024548  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:58:30.024622  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:58:30.049270  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:58:30.049353  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:58:30.073294  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:58:30.073363  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:58:30.097034  548894 provision.go:87] duration metric: took 265.374667ms to configureAuth
	I1008 17:58:30.097066  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:58:30.097258  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:30.097336  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.100086  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100367  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.100397  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100547  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.100709  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.100901  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.101076  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.101293  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.101528  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.101554  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:58:30.316444  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:58:30.316471  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:58:30.316479  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetURL
	I1008 17:58:30.317802  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using libvirt version 6000000
	I1008 17:58:30.320137  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320544  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.320587  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320709  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:58:30.320718  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:58:30.320726  548894 client.go:171] duration metric: took 24.527519698s to LocalClient.Create
	I1008 17:58:30.320756  548894 start.go:167] duration metric: took 24.527598536s to libmachine.API.Create "ha-094095"
	I1008 17:58:30.320770  548894 start.go:293] postStartSetup for "ha-094095-m02" (driver="kvm2")
	I1008 17:58:30.320783  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:58:30.320822  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.321070  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:58:30.321097  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.323268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323601  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.323630  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323770  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.323934  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.324073  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.324173  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.408962  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:58:30.413084  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:58:30.413110  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:58:30.413178  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:58:30.413266  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:58:30.413279  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:58:30.413385  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:58:30.423213  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:30.446502  548894 start.go:296] duration metric: took 125.715217ms for postStartSetup
	I1008 17:58:30.446572  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:30.447199  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.449851  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450235  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.450268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450469  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:30.450701  548894 start.go:128] duration metric: took 24.675682473s to createHost
	I1008 17:58:30.450743  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.453038  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453348  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.453375  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.453697  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.453857  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.454010  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.454159  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.454400  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.454410  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:58:30.559077  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410310.517666608
	
	I1008 17:58:30.559107  548894 fix.go:216] guest clock: 1728410310.517666608
	I1008 17:58:30.559114  548894 fix.go:229] Guest: 2024-10-08 17:58:30.517666608 +0000 UTC Remote: 2024-10-08 17:58:30.45071757 +0000 UTC m=+71.541677784 (delta=66.949038ms)
	I1008 17:58:30.559131  548894 fix.go:200] guest clock delta is within tolerance: 66.949038ms
	I1008 17:58:30.559136  548894 start.go:83] releasing machines lock for "ha-094095-m02", held for 24.78424013s
	I1008 17:58:30.559157  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.559409  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.562379  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.562717  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.562741  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.564989  548894 out.go:177] * Found network options:
	I1008 17:58:30.566270  548894 out.go:177]   - NO_PROXY=192.168.39.99
	W1008 17:58:30.567463  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.567496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568070  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568303  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568423  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:58:30.568473  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	W1008 17:58:30.568503  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.568602  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:58:30.568624  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.570953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571141  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571291  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571315  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571468  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571489  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571498  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571671  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572011  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572054  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.572151  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.807329  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:58:30.813213  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:58:30.813287  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:58:30.829683  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:58:30.829708  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:58:30.829790  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:58:30.845021  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:58:30.858172  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:58:30.858226  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:58:30.871442  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:58:30.884200  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:58:31.001594  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:58:31.145565  548894 docker.go:233] disabling docker service ...
	I1008 17:58:31.145647  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:58:31.159802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:58:31.172545  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:58:31.317614  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:58:31.428085  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:58:31.441474  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:58:31.458921  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:58:31.458992  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.469332  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:58:31.469401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.479553  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.489606  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.499476  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:58:31.509618  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.519561  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.536177  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
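Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a drop-in roughly like the following (reconstructed from the commands, not captured from the guest; section names are the standard CRI-O TOML sections):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]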
	I1008 17:58:31.546145  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:58:31.555445  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:58:31.555504  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:58:31.568401  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
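The failed sysctl probe above is expected: /proc/sys/net/bridge only exists once br_netfilter is loaded, so the provisioner loads the module and then enables IPv4 forwarding. The equivalent manual steps (a sketch):

    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # succeeds once the module is present
    sudo sysctl -w net.ipv4.ip_forward=1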
	I1008 17:58:31.577660  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:31.690206  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:58:31.785577  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:58:31.785668  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:58:31.790440  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:58:31.790488  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:58:31.794008  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:58:31.830698  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:58:31.830779  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.860448  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.888491  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:58:31.889686  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:58:31.890999  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:31.893749  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894085  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:31.894111  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894298  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:58:31.898872  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:31.911229  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:58:31.911431  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:31.911784  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.911827  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.926475  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I1008 17:58:31.926940  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.927427  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.927446  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.927739  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.927928  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:31.929331  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:31.929604  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.929636  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.944569  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1008 17:58:31.945071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.945554  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.945577  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.945884  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.946077  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:31.946243  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.65
	I1008 17:58:31.946257  548894 certs.go:194] generating shared ca certs ...
	I1008 17:58:31.946274  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:31.946447  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:58:31.946488  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:58:31.946503  548894 certs.go:256] generating profile certs ...
	I1008 17:58:31.946591  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:58:31.946614  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9
	I1008 17:58:31.946631  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.254]
	I1008 17:58:32.004758  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 ...
	I1008 17:58:32.004782  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9: {Name:mk5f5c650d9dd5d2249fb843b585c028b52aecec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.004936  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 ...
	I1008 17:58:32.004948  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9: {Name:mk72de6dbb470530f019dc623057311deeb636c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.005014  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:58:32.005145  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
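The apiserver serving certificate generated here has to name every address a client may dial, including the service ClusterIP 10.96.0.1, the HA VIP 192.168.39.254, and both control-plane node IPs. One way to inspect the SANs on the node afterwards (a sketch; the path is the one minikube uses):

    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'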
	I1008 17:58:32.005267  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:58:32.005283  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:58:32.005296  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:58:32.005308  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:58:32.005321  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:58:32.005335  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:58:32.005348  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:58:32.005359  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:58:32.005370  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:58:32.005421  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:58:32.005451  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:58:32.005460  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:58:32.005496  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:58:32.005520  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:58:32.005541  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:58:32.005579  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:32.005605  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.005619  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.005631  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.005665  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:32.008694  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009085  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:32.009115  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009227  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:32.009422  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:32.009576  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:32.009716  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:32.082578  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:58:32.087536  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:58:32.098777  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:58:32.102888  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:58:32.112522  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:58:32.116400  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:58:32.126625  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:58:32.130706  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:58:32.141238  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:58:32.145206  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:58:32.154909  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:58:32.159011  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:58:32.169341  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:58:32.193388  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:58:32.215733  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:58:32.237995  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:58:32.260545  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 17:58:32.283295  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 17:58:32.305577  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:58:32.327963  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:58:32.350081  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:58:32.372344  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:58:32.394280  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:58:32.416064  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:58:32.431348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:58:32.446729  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:58:32.462348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:58:32.479908  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:58:32.495280  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:58:32.510638  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
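A joining control plane needs the cluster's shared key material before it can mint its own component certificates, so the runner copies the cluster CA, proxy-client CA, front-proxy CA, etcd CA and the service-account keypair over SSH instead of relying on kubeadm's certificate upload. Expected layout on the new node after the copy (a sketch):

    ls /var/lib/minikube/certs/{ca.crt,ca.key,sa.pub,sa.key,front-proxy-ca.crt,front-proxy-ca.key}
    ls /var/lib/minikube/certs/etcd/{ca.crt,ca.key}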
	I1008 17:58:32.526014  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:58:32.531514  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:58:32.541262  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545663  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545708  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.551139  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:58:32.561010  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:58:32.570960  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575030  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575086  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.580417  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:58:32.590088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:58:32.600566  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604834  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604876  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.610374  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
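OpenSSL locates trusted CAs in /etc/ssl/certs by subject-hash filename, which is why each certificate copied into /usr/share/ca-certificates is followed by an openssl x509 -hash call and a symlink such as b5213941.0. The same linking, done generically (a sketch):

    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    done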
	I1008 17:58:32.620430  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:58:32.624404  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:58:32.624460  548894 kubeadm.go:934] updating node {m02 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1008 17:58:32.624566  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:58:32.624597  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:58:32.624632  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:58:32.640207  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:58:32.640276  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
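The manifest above runs kube-vip as a static pod on every control-plane node: the elected leader (lease plndr-cp-lock, 5s duration) claims 192.168.39.254 on eth0, and with lb_enable it also load-balances port 8443 across the API servers. Two ways to see which node currently holds the VIP (a sketch; assumes a working kubeconfig on the host):

    # on a control-plane guest: the leader carries the VIP on eth0
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # from the host: inspect the leader-election lease
    kubectl -n kube-system get lease plndr-cp-lock -o yaml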
	I1008 17:58:32.640318  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.651418  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:58:32.651482  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.660840  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:58:32.660867  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660925  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660955  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1008 17:58:32.660974  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1008 17:58:32.665332  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:58:32.665355  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:58:33.330557  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.330641  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.335582  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:58:33.335623  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:58:33.372522  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:58:33.392996  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.393114  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.400473  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:58:33.400509  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
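kubectl, kubeadm and kubelet are fetched once into the host-side cache, verified against the .sha256 files published at dl.k8s.io, and then copied into /var/lib/minikube/binaries/v1.31.1 on each new node. Re-checking a cached binary by hand would look like this (a sketch; the cache path is taken from the log lines above):

    cd /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check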
	I1008 17:58:33.862223  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:58:33.873974  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 17:58:33.890552  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:58:33.907049  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:58:33.923719  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:58:33.927643  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
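The one-liner above rewrites /etc/hosts by filtering out any existing control-plane.minikube.internal entry, appending the VIP mapping, and copying the temp file back with sudo (a plain shell redirect would not survive the privilege boundary). Verifying the mapping on the guest (a sketch):

    getent hosts control-plane.minikube.internal    # expect 192.168.39.254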
	I1008 17:58:33.940952  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:34.068619  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:34.085108  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:34.085464  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:34.085525  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:34.100590  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1008 17:58:34.101071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:34.101641  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:34.101663  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:34.101990  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:34.102197  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:34.102362  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:58:34.102466  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:58:34.102489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:34.105069  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105405  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:34.105432  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105659  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:34.105846  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:34.106036  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:34.106174  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:34.253303  548894 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:34.253365  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443"
	I1008 17:58:55.647352  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443": (21.393954296s)
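The completed command above is the actual kubeadm join that minikube ran on m02 over SSH (via ssh_runner). Purely as an illustration of that remote-execution pattern, here is a minimal Go sketch of running one command over SSH with golang.org/x/crypto/ssh; the helper name runRemote, the address, and the key path are hypothetical, and this is not minikube's ssh_runner implementation.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes a single command on a remote host over SSH and returns
// its combined stdout/stderr. Hypothetical helper, for illustration only.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Address, user, and key path below are hypothetical examples.
	out, err := runRemote("192.168.39.65:22", "docker",
		"/home/jenkins/.minikube/machines/example/id_rsa",
		"sudo systemctl status kubelet")
	fmt.Println(out, err)
}
```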
	I1008 17:58:55.647399  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 17:58:56.179900  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m02 minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 17:58:56.351414  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m02 node-role.kubernetes.io/control-plane:NoSchedule-
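The two kubectl invocations above label the new node and drop the node-role.kubernetes.io/control-plane:NoSchedule taint so regular workloads can schedule onto it. For reference, removing that same taint through client-go could look roughly like the sketch below; removeControlPlaneTaint is a hypothetical helper, not minikube's code.

```go
package taints

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// removeControlPlaneTaint drops the control-plane NoSchedule taint, mirroring
// `kubectl taint nodes <name> node-role.kubernetes.io/control-plane:NoSchedule-`.
func removeControlPlaneTaint(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := make([]corev1.Taint, 0, len(node.Spec.Taints))
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue // this is the taint being removed
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```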
	I1008 17:58:56.472891  548894 start.go:319] duration metric: took 22.370522266s to joinCluster
	I1008 17:58:56.472999  548894 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:56.473310  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:56.474358  548894 out.go:177] * Verifying Kubernetes components...
	I1008 17:58:56.475511  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:56.748460  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:56.780862  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:56.781184  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 17:58:56.781253  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 17:58:56.781476  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m02" to be "Ready" ...
	I1008 17:58:56.781593  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:56.781601  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:56.781608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:56.781612  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:56.791092  548894 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1008 17:58:57.281764  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.281787  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.281795  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.281800  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.293233  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:58:57.782526  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.782566  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.782571  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.786781  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.281871  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.281899  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.281911  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.281917  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.285022  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:58:58.781938  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.781972  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.781983  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.781989  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.786159  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.786795  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:58:59.282562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.282596  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.282609  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.282619  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.286768  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:59.781827  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.781856  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.781867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.781872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.785211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:00.282380  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.282406  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.282417  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.282424  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.285358  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:00.782500  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.782529  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.782538  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.782541  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.785321  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.281680  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.281702  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.281711  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.281717  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.284371  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.285041  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:01.782411  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.782443  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.782453  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.782458  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.785485  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.282181  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.282203  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.282212  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.282217  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.285355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.782528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.782565  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.782571  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.785688  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.282604  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.282627  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.282638  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.282646  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.286199  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.286918  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:03.782407  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.782431  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.782441  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.782447  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.785212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:04.282369  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.282392  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.282400  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.282404  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.285540  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:04.781799  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.781818  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.781831  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.781835  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.785050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.282133  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.282156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.282163  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.282166  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.285211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.782060  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.782079  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.782090  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.782097  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.784932  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:05.785622  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:06.282491  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.282513  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.282521  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.282524  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.285446  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:06.782400  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.782424  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.782433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.782439  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.787263  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:07.282189  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.282221  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.282227  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.282231  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.285027  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:07.781864  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.781885  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.781895  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.781901  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.784237  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:08.281994  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.282014  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.282022  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.282027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.285398  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:08.286042  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:08.782428  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.782454  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.782466  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.782472  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.785709  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.282163  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.282193  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.282204  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.282211  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.285429  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.782392  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.782415  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.782423  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.782427  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.785404  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.282376  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.282398  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.282407  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.282410  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.293860  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:59:10.295059  548894 node_ready.go:49] node "ha-094095-m02" has status "Ready":"True"
	I1008 17:59:10.295090  548894 node_ready.go:38] duration metric: took 13.513574743s for node "ha-094095-m02" to be "Ready" ...
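The GET loop above is node_ready.go polling /api/v1/nodes/ha-094095-m02 roughly every 500ms until the node's Ready condition flips to True (13.5s here). A minimal client-go sketch of the same wait, assuming an already-built clientset; waitNodeReady is a hypothetical name, not the minikube helper.

```go
package waiters

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout expires, mirroring what the log above is doing.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // transient errors are simply retried until the deadline
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}
```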
	I1008 17:59:10.295105  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:59:10.295211  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:10.295228  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.295239  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.295243  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.309090  548894 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1008 17:59:10.317441  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.317556  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 17:59:10.317568  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.317578  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.317586  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.321472  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.322135  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.322156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.322167  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.322174  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.328845  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.329380  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.329405  548894 pod_ready.go:82] duration metric: took 11.930599ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329419  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329498  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 17:59:10.329509  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.329520  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.329528  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.336402  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.337294  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.337313  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.337323  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.337328  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.340848  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.341320  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.341341  548894 pod_ready.go:82] duration metric: took 11.909652ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341354  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341421  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 17:59:10.341432  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.341442  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.341450  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.343586  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.344175  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.344191  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.344198  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.344202  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.346350  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.347112  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.347134  548894 pod_ready.go:82] duration metric: took 5.772495ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347147  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347220  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 17:59:10.347231  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.347241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.347249  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.349293  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.349880  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.349897  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.349916  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.349921  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.352009  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.352470  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.352496  548894 pod_ready.go:82] duration metric: took 5.340167ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.352518  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.482865  548894 request.go:632] Waited for 130.276413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482957  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482968  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.482977  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.482983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.486050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.683204  548894 request.go:632] Waited for 196.383245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683286  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683291  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.683299  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.683302  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.686545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.687112  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.687134  548894 pod_ready.go:82] duration metric: took 334.609013ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.687145  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.882406  548894 request.go:632] Waited for 195.187252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882484  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882489  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.882498  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.882503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.885610  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.082756  548894 request.go:632] Waited for 196.397183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082846  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082857  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.082869  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.082874  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.085950  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.086623  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.086650  548894 pod_ready.go:82] duration metric: took 399.497445ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.086663  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.282438  548894 request.go:632] Waited for 195.669677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282535  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282544  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.282552  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.282557  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.285746  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.482936  548894 request.go:632] Waited for 196.360528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483014  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483021  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.483030  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.483037  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.486267  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.486823  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.486845  548894 pod_ready.go:82] duration metric: took 400.172946ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.486856  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.683063  548894 request.go:632] Waited for 196.099154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683155  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683168  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.683181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.683192  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.686310  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.882490  548894 request.go:632] Waited for 195.281424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882569  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.882580  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.882587  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.885732  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.886206  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.886228  548894 pod_ready.go:82] duration metric: took 399.364956ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.886243  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.083083  548894 request.go:632] Waited for 196.741087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083174  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083181  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.083193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.083199  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.086438  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.282815  548894 request.go:632] Waited for 195.357265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282879  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282884  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.282892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.282897  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.286211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.286955  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.286978  548894 pod_ready.go:82] duration metric: took 400.728245ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.286989  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.483080  548894 request.go:632] Waited for 196.002385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483159  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483167  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.483181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.483193  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.486235  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.683233  548894 request.go:632] Waited for 196.354052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683315  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683322  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.683334  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.683341  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.686419  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.687164  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.687194  548894 pod_ready.go:82] duration metric: took 400.198282ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.687210  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.883073  548894 request.go:632] Waited for 195.753943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883139  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883145  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.883152  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.883156  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.886291  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.083210  548894 request.go:632] Waited for 196.369192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083288  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083296  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.083304  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.083308  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.086479  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.087168  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.087188  548894 pod_ready.go:82] duration metric: took 399.968628ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.087198  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.283359  548894 request.go:632] Waited for 196.068525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283420  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283425  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.283433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.283438  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.286484  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.482457  548894 request.go:632] Waited for 195.25665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482575  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482588  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.482599  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.482605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.485671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.486395  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.486417  548894 pod_ready.go:82] duration metric: took 399.212171ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.486429  548894 pod_ready.go:39] duration metric: took 3.191309926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
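pod_ready.go then waits, per label/component, for each system-critical pod's PodReady condition before the overall wait above is declared done. A rough client-go equivalent of one such check, again assuming a ready clientset; allPodsReady is a hypothetical helper, not the minikube implementation.

```go
package waiters

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allPodsReady reports whether every kube-system pod matching the label
// selector (e.g. "k8s-app=kube-dns" or "component=etcd") is Ready.
func allPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}
```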
	I1008 17:59:13.486448  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 17:59:13.486516  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 17:59:13.501134  548894 api_server.go:72] duration metric: took 17.028092431s to wait for apiserver process to appear ...
	I1008 17:59:13.501165  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 17:59:13.501208  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 17:59:13.505717  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 17:59:13.506345  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 17:59:13.506369  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.506381  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.506389  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.508475  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:13.508579  548894 api_server.go:141] control plane version: v1.31.1
	I1008 17:59:13.508596  548894 api_server.go:131] duration metric: took 7.424538ms to wait for apiserver health ...
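The health check above is a plain HTTPS GET of /healthz on the first control plane, followed by GET /version to record the control-plane version (v1.31.1). A stripped-down sketch of the healthz probe is below; it skips the client-certificate auth and CA verification the real check uses, so it is illustrative only.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NOTE: InsecureSkipVerify and the missing client cert make this a sketch,
	// not a faithful copy of minikube's authenticated healthz check.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.99:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 and body "ok"
}
```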
	I1008 17:59:13.508606  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 17:59:13.682454  548894 request.go:632] Waited for 173.762668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682527  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682532  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.682541  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.682546  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.687595  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 17:59:13.692646  548894 system_pods.go:59] 17 kube-system pods found
	I1008 17:59:13.692692  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:13.692702  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:13.692707  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:13.692713  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:13.692718  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:13.692723  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:13.692730  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:13.692735  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:13.692744  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:13.692750  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:13.692755  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:13.692760  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:13.692765  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:13.692774  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:13.692778  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:13.692783  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:13.692788  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:13.692796  548894 system_pods.go:74] duration metric: took 184.183414ms to wait for pod list to return data ...
	I1008 17:59:13.692811  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 17:59:13.883264  548894 request.go:632] Waited for 190.350103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883340  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883352  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.883364  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.883373  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.887200  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.887443  548894 default_sa.go:45] found service account: "default"
	I1008 17:59:13.887464  548894 default_sa.go:55] duration metric: took 194.642236ms for default service account to be created ...
	I1008 17:59:13.887473  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 17:59:14.083128  548894 request.go:632] Waited for 195.575348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083197  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083204  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.083215  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.083224  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.087502  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:14.091850  548894 system_pods.go:86] 17 kube-system pods found
	I1008 17:59:14.091874  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:14.091880  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:14.091884  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:14.091888  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:14.091895  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:14.091898  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:14.091903  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:14.091909  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:14.091915  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:14.091921  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:14.091929  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:14.091935  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:14.091943  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:14.091948  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:14.091954  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:14.091958  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:14.091961  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:14.091969  548894 system_pods.go:126] duration metric: took 204.490014ms to wait for k8s-apps to be running ...
	I1008 17:59:14.091978  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 17:59:14.092031  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:14.107751  548894 system_svc.go:56] duration metric: took 15.765669ms WaitForService to wait for kubelet
	I1008 17:59:14.107782  548894 kubeadm.go:582] duration metric: took 17.634744099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:59:14.107804  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 17:59:14.283342  548894 request.go:632] Waited for 175.43028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283397  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283402  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.283410  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.283415  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.286910  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:14.287827  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287854  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287877  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287883  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287892  548894 node_conditions.go:105] duration metric: took 180.082842ms to run NodePressure ...
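The NodePressure step just reads each node's reported capacity from the node list (here 2 CPUs and 17734596Ki of ephemeral storage per node). Reading the same fields with client-go could look like this sketch; printCapacity is a made-up name.

```go
package waiters

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity lists every node's CPU and ephemeral-storage capacity,
// the two values the NodePressure check in the log reports.
func printCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
```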
	I1008 17:59:14.287908  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:59:14.287939  548894 start.go:255] writing updated cluster config ...
	I1008 17:59:14.289665  548894 out.go:201] 
	I1008 17:59:14.290934  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:14.291033  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.292598  548894 out.go:177] * Starting "ha-094095-m03" control-plane node in "ha-094095" cluster
	I1008 17:59:14.293602  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:59:14.293620  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:59:14.293722  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:59:14.293741  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:59:14.293865  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.294036  548894 start.go:360] acquireMachinesLock for ha-094095-m03: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:59:14.294084  548894 start.go:364] duration metric: took 28.442µs to acquireMachinesLock for "ha-094095-m03"
	I1008 17:59:14.294116  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
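Editor's note: in the config dump above, the third entry in Nodes ("m03" with an empty IP) is what sends the run down the new-machine path that follows. A minimal, illustrative Go sketch of reading the saved profile config.json and listing nodes that still lack an IP; the struct mirrors only the field names visible in the dump, and the on-disk JSON layout is an assumption, not minikube's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Node mirrors only the per-node fields visible in the struct dump above;
// the JSON layout is assumed for illustration.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type Profile struct {
	Name  string
	Nodes []Node
}

func main() {
	// Hypothetical local copy of the profile config.json referenced in the log.
	data, err := os.ReadFile("config.json")
	if err != nil {
		panic(err)
	}
	var p Profile
	if err := json.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	for _, n := range p.Nodes {
		if n.IP == "" {
			fmt.Printf("node %q has no IP yet; a new machine must be created\n", n.Name)
		}
	}
}
```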
	I1008 17:59:14.294207  548894 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1008 17:59:14.295495  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:59:14.295567  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:14.295608  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:14.310848  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I1008 17:59:14.311356  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:14.311872  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:14.311899  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:14.312212  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:14.312396  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:14.312674  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:14.312844  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:59:14.312876  548894 client.go:168] LocalClient.Create starting
	I1008 17:59:14.312902  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:59:14.312934  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.312948  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313000  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:59:14.313019  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.313027  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313042  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:59:14.313050  548894 main.go:141] libmachine: (ha-094095-m03) Calling .PreCreateCheck
	I1008 17:59:14.313206  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:14.313583  548894 main.go:141] libmachine: Creating machine...
	I1008 17:59:14.313600  548894 main.go:141] libmachine: (ha-094095-m03) Calling .Create
	I1008 17:59:14.313739  548894 main.go:141] libmachine: (ha-094095-m03) Creating KVM machine...
	I1008 17:59:14.314906  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing default KVM network
	I1008 17:59:14.315074  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing private KVM network mk-ha-094095
	I1008 17:59:14.315221  548894 main.go:141] libmachine: (ha-094095-m03) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.315247  548894 main.go:141] libmachine: (ha-094095-m03) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:59:14.315327  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.315217  549655 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.315388  548894 main.go:141] libmachine: (ha-094095-m03) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:59:14.593209  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.593087  549655 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa...
	I1008 17:59:14.821442  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821329  549655 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk...
	I1008 17:59:14.821476  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing magic tar header
	I1008 17:59:14.821491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing SSH key tar header
	I1008 17:59:14.821502  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821478  549655 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.821659  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03
	I1008 17:59:14.821694  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 (perms=drwx------)
	I1008 17:59:14.821705  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:59:14.821719  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.821729  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:59:14.821740  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:59:14.821750  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:59:14.821762  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:59:14.821772  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home
	I1008 17:59:14.821784  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:59:14.821794  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Skipping /home - not owner
	I1008 17:59:14.821808  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:59:14.821819  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:59:14.821836  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:59:14.821846  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:14.822739  548894 main.go:141] libmachine: (ha-094095-m03) define libvirt domain using xml: 
	I1008 17:59:14.822758  548894 main.go:141] libmachine: (ha-094095-m03) <domain type='kvm'>
	I1008 17:59:14.822767  548894 main.go:141] libmachine: (ha-094095-m03)   <name>ha-094095-m03</name>
	I1008 17:59:14.822774  548894 main.go:141] libmachine: (ha-094095-m03)   <memory unit='MiB'>2200</memory>
	I1008 17:59:14.822782  548894 main.go:141] libmachine: (ha-094095-m03)   <vcpu>2</vcpu>
	I1008 17:59:14.822792  548894 main.go:141] libmachine: (ha-094095-m03)   <features>
	I1008 17:59:14.822799  548894 main.go:141] libmachine: (ha-094095-m03)     <acpi/>
	I1008 17:59:14.822805  548894 main.go:141] libmachine: (ha-094095-m03)     <apic/>
	I1008 17:59:14.822815  548894 main.go:141] libmachine: (ha-094095-m03)     <pae/>
	I1008 17:59:14.822822  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.822827  548894 main.go:141] libmachine: (ha-094095-m03)   </features>
	I1008 17:59:14.822834  548894 main.go:141] libmachine: (ha-094095-m03)   <cpu mode='host-passthrough'>
	I1008 17:59:14.822838  548894 main.go:141] libmachine: (ha-094095-m03)   
	I1008 17:59:14.822842  548894 main.go:141] libmachine: (ha-094095-m03)   </cpu>
	I1008 17:59:14.822847  548894 main.go:141] libmachine: (ha-094095-m03)   <os>
	I1008 17:59:14.822857  548894 main.go:141] libmachine: (ha-094095-m03)     <type>hvm</type>
	I1008 17:59:14.822865  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='cdrom'/>
	I1008 17:59:14.822879  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='hd'/>
	I1008 17:59:14.822888  548894 main.go:141] libmachine: (ha-094095-m03)     <bootmenu enable='no'/>
	I1008 17:59:14.822897  548894 main.go:141] libmachine: (ha-094095-m03)   </os>
	I1008 17:59:14.822903  548894 main.go:141] libmachine: (ha-094095-m03)   <devices>
	I1008 17:59:14.822910  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='cdrom'>
	I1008 17:59:14.822919  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/boot2docker.iso'/>
	I1008 17:59:14.822926  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hdc' bus='scsi'/>
	I1008 17:59:14.822931  548894 main.go:141] libmachine: (ha-094095-m03)       <readonly/>
	I1008 17:59:14.822939  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.822951  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='disk'>
	I1008 17:59:14.822984  548894 main.go:141] libmachine: (ha-094095-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:59:14.822998  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk'/>
	I1008 17:59:14.823004  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hda' bus='virtio'/>
	I1008 17:59:14.823008  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.823012  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823018  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='mk-ha-094095'/>
	I1008 17:59:14.823028  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823037  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823050  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823062  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='default'/>
	I1008 17:59:14.823072  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823080  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823089  548894 main.go:141] libmachine: (ha-094095-m03)     <serial type='pty'>
	I1008 17:59:14.823097  548894 main.go:141] libmachine: (ha-094095-m03)       <target port='0'/>
	I1008 17:59:14.823105  548894 main.go:141] libmachine: (ha-094095-m03)     </serial>
	I1008 17:59:14.823114  548894 main.go:141] libmachine: (ha-094095-m03)     <console type='pty'>
	I1008 17:59:14.823128  548894 main.go:141] libmachine: (ha-094095-m03)       <target type='serial' port='0'/>
	I1008 17:59:14.823139  548894 main.go:141] libmachine: (ha-094095-m03)     </console>
	I1008 17:59:14.823147  548894 main.go:141] libmachine: (ha-094095-m03)     <rng model='virtio'>
	I1008 17:59:14.823159  548894 main.go:141] libmachine: (ha-094095-m03)       <backend model='random'>/dev/random</backend>
	I1008 17:59:14.823166  548894 main.go:141] libmachine: (ha-094095-m03)     </rng>
	I1008 17:59:14.823173  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823181  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823189  548894 main.go:141] libmachine: (ha-094095-m03)   </devices>
	I1008 17:59:14.823202  548894 main.go:141] libmachine: (ha-094095-m03) </domain>
	I1008 17:59:14.823214  548894 main.go:141] libmachine: (ha-094095-m03) 
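Editor's note: the block above is the libvirt domain XML the kvm2 driver defines, one log line per element: the boot2docker ISO attached as a SCSI cdrom, the raw disk as a virtio disk, and two virtio NICs on the mk-ha-094095 and default networks. To double-check what was actually defined on the host, one could shell out to virsh; a small sketch, assuming virsh is installed and qemu:///system is reachable:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Dump the domain definition the kvm2 driver just created.
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"dumpxml", "ha-094095-m03").CombinedOutput()
	if err != nil {
		fmt.Println("virsh dumpxml failed:", err)
	}
	fmt.Println(string(out))

	// List DHCP leases on the cluster network to see which IP the VM obtained.
	out, err = exec.Command("virsh", "--connect", "qemu:///system",
		"net-dhcp-leases", "mk-ha-094095").CombinedOutput()
	if err != nil {
		fmt.Println("virsh net-dhcp-leases failed:", err)
	}
	fmt.Println(string(out))
}
```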
	I1008 17:59:14.829896  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:d4:34:b1 in network default
	I1008 17:59:14.830619  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:14.830642  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring networks are active...
	I1008 17:59:14.831385  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network default is active
	I1008 17:59:14.831784  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network mk-ha-094095 is active
	I1008 17:59:14.832205  548894 main.go:141] libmachine: (ha-094095-m03) Getting domain xml...
	I1008 17:59:14.832929  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:16.039421  548894 main.go:141] libmachine: (ha-094095-m03) Waiting to get IP...
	I1008 17:59:16.040212  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.040604  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.040627  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.040576  549655 retry.go:31] will retry after 310.617511ms: waiting for machine to come up
	I1008 17:59:16.353098  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.353638  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.353666  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.353600  549655 retry.go:31] will retry after 370.013025ms: waiting for machine to come up
	I1008 17:59:16.725039  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.725471  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.725511  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.725419  549655 retry.go:31] will retry after 335.057817ms: waiting for machine to come up
	I1008 17:59:17.061762  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.062145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.062168  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.062095  549655 retry.go:31] will retry after 553.959397ms: waiting for machine to come up
	I1008 17:59:17.617869  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.618404  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.618431  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.618345  549655 retry.go:31] will retry after 506.335647ms: waiting for machine to come up
	I1008 17:59:18.125977  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.126353  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.126384  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.126291  549655 retry.go:31] will retry after 734.408354ms: waiting for machine to come up
	I1008 17:59:18.862107  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.862605  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.862632  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.862544  549655 retry.go:31] will retry after 1.020122482s: waiting for machine to come up
	I1008 17:59:19.884038  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:19.884492  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:19.884530  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:19.884425  549655 retry.go:31] will retry after 1.125801014s: waiting for machine to come up
	I1008 17:59:21.011532  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:21.011993  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:21.012020  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:21.011944  549655 retry.go:31] will retry after 1.660141079s: waiting for machine to come up
	I1008 17:59:22.673143  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:22.673540  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:22.673570  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:22.673522  549655 retry.go:31] will retry after 1.579793422s: waiting for machine to come up
	I1008 17:59:24.255498  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:24.256062  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:24.256089  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:24.256014  549655 retry.go:31] will retry after 2.586780396s: waiting for machine to come up
	I1008 17:59:26.845780  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:26.846232  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:26.846256  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:26.846181  549655 retry.go:31] will retry after 2.461770006s: waiting for machine to come up
	I1008 17:59:29.309639  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:29.310146  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:29.310176  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:29.310088  549655 retry.go:31] will retry after 4.519355473s: waiting for machine to come up
	I1008 17:59:33.833985  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:33.834361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:33.834386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:33.834293  549655 retry.go:31] will retry after 3.493644498s: waiting for machine to come up
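Editor's note: the "will retry after …" lines above are the driver polling for a DHCP lease with an increasing, jittered delay (roughly 300 ms growing to a few seconds). A minimal sketch of that pattern; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, not the driver's real function:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical placeholder for querying the DHCP leases
// of network mk-ha-094095 for the VM's MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing backoff similar to the retry cadence in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("no IP for %s within %v", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:e6:8f:e3", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```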
	I1008 17:59:37.331421  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.331914  548894 main.go:141] libmachine: (ha-094095-m03) Found IP for machine: 192.168.39.194
	I1008 17:59:37.331939  548894 main.go:141] libmachine: (ha-094095-m03) Reserving static IP address...
	I1008 17:59:37.331956  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has current primary IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.332395  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find host DHCP lease matching {name: "ha-094095-m03", mac: "52:54:00:e6:8f:e3", ip: "192.168.39.194"} in network mk-ha-094095
	I1008 17:59:37.404136  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Getting to WaitForSSH function...
	I1008 17:59:37.404175  548894 main.go:141] libmachine: (ha-094095-m03) Reserved static IP address: 192.168.39.194
	I1008 17:59:37.404188  548894 main.go:141] libmachine: (ha-094095-m03) Waiting for SSH to be available...
	I1008 17:59:37.406755  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407114  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.407145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407257  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH client type: external
	I1008 17:59:37.407295  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa (-rw-------)
	I1008 17:59:37.407348  548894 main.go:141] libmachine: (ha-094095-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:59:37.407377  548894 main.go:141] libmachine: (ha-094095-m03) DBG | About to run SSH command:
	I1008 17:59:37.407391  548894 main.go:141] libmachine: (ha-094095-m03) DBG | exit 0
	I1008 17:59:37.534234  548894 main.go:141] libmachine: (ha-094095-m03) DBG | SSH cmd err, output: <nil>: 
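Editor's note: the probe above simply runs "exit 0" over SSH as user docker with the non-interactive options listed in the log; success means the guest is reachable and the injected key works. A small Go sketch of the same probe via os/exec, using a subset of those options (the key path here is a placeholder):

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs "exit 0" on the new VM with non-interactive SSH options
// matching those shown in the log above.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := sshReady("192.168.39.194", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}
```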
	I1008 17:59:37.534542  548894 main.go:141] libmachine: (ha-094095-m03) KVM machine creation complete!
	I1008 17:59:37.535062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:37.535615  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.535835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.536043  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:59:37.536062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetState
	I1008 17:59:37.537459  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:59:37.537477  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:59:37.537484  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:59:37.537492  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.539962  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540458  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.540491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540661  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.540847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.540985  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.541188  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.541386  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.541674  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.541690  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:59:37.649416  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:37.649443  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:59:37.649452  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.652360  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652754  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.652783  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652904  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.653099  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653253  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653372  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.653521  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.653691  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.653700  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:59:37.763719  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:59:37.763801  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:59:37.763820  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:59:37.763835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764121  548894 buildroot.go:166] provisioning hostname "ha-094095-m03"
	I1008 17:59:37.764156  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764347  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.766798  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.767194  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.767617  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767784  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767982  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.768161  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.768362  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.768381  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m03 && echo "ha-094095-m03" | sudo tee /etc/hostname
	I1008 17:59:37.892598  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m03
	
	I1008 17:59:37.892638  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.895717  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896104  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.896139  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896357  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.896582  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896764  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896930  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.897130  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.897346  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.897371  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
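Editor's note: the shell snippet above keeps the new hostname resolvable without DNS by rewriting (or appending) a 127.0.1.1 entry in /etc/hosts, skipping the edit if the name is already present. A short Go sketch that roughly mirrors that idempotent edit, purely for illustration (it is not minikube's code and needs root to write /etc/hosts):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname rewrites an existing 127.0.1.1 line to point at name,
// or appends one if none exists.
func ensureHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	entry := "127.0.1.1 " + name
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = entry
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, entry)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "ha-094095-m03"); err != nil {
		fmt.Println(err)
	}
}
```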
	I1008 17:59:38.015892  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:38.015942  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:59:38.015964  548894 buildroot.go:174] setting up certificates
	I1008 17:59:38.015976  548894 provision.go:84] configureAuth start
	I1008 17:59:38.015994  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:38.016285  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.018925  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019329  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.019361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019480  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.021681  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022085  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.022109  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022295  548894 provision.go:143] copyHostCerts
	I1008 17:59:38.022355  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022398  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:59:38.022410  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022497  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:59:38.022612  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022639  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:59:38.022646  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022684  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:59:38.022749  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022772  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:59:38.022780  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022817  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:59:38.022905  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m03 san=[127.0.0.1 192.168.39.194 ha-094095-m03 localhost minikube]
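Editor's note: the provision step above generates a per-machine server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, the VM IP, the hostname, localhost, minikube) and the 26280h expiration from the config. A compressed sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; errors are elided for brevity and this is illustrative, not minikube's implementation (which loads ca.pem/ca-key.pem instead of creating a throwaway CA):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA stand-in for the real ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs shown in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-094095-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-094095-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.194")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```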
	I1008 17:59:38.409825  548894 provision.go:177] copyRemoteCerts
	I1008 17:59:38.409880  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:59:38.409906  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.412474  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.412819  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.412850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.413057  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.413233  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.413436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.413614  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.500707  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:59:38.500793  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:59:38.526942  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:59:38.527009  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:59:38.552205  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:59:38.552273  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 17:59:38.575397  548894 provision.go:87] duration metric: took 559.401387ms to configureAuth
	I1008 17:59:38.575426  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:59:38.575799  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:38.575895  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.579241  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579746  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.579778  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579962  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.580162  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580375  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580557  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.580756  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.580976  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.581001  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:59:38.814916  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:59:38.814943  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:59:38.814951  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetURL
	I1008 17:59:38.816195  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using libvirt version 6000000
	I1008 17:59:38.818782  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.819181  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819313  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:59:38.819324  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:59:38.819331  548894 client.go:171] duration metric: took 24.506447945s to LocalClient.Create
	I1008 17:59:38.819354  548894 start.go:167] duration metric: took 24.506513664s to libmachine.API.Create "ha-094095"
	I1008 17:59:38.819366  548894 start.go:293] postStartSetup for "ha-094095-m03" (driver="kvm2")
	I1008 17:59:38.819379  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:59:38.819402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:38.819667  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:59:38.819695  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.822386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.822850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.822878  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.823079  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.823255  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.823425  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.823576  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.911016  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:59:38.915516  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:59:38.915544  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:59:38.915616  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:59:38.915703  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:59:38.915717  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:59:38.915843  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:59:38.927016  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:38.951613  548894 start.go:296] duration metric: took 132.232716ms for postStartSetup
	I1008 17:59:38.951663  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:38.952254  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.954773  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955177  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.955206  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955479  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:38.955726  548894 start.go:128] duration metric: took 24.661507137s to createHost
	I1008 17:59:38.955754  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.957824  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958152  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.958180  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958260  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.958436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958614  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958783  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.958982  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.959149  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.959198  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:59:39.066802  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410379.042145365
	
	I1008 17:59:39.066831  548894 fix.go:216] guest clock: 1728410379.042145365
	I1008 17:59:39.066838  548894 fix.go:229] Guest: 2024-10-08 17:59:39.042145365 +0000 UTC Remote: 2024-10-08 17:59:38.955741605 +0000 UTC m=+140.046701810 (delta=86.40376ms)
	I1008 17:59:39.066854  548894 fix.go:200] guest clock delta is within tolerance: 86.40376ms
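Editor's note: fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the ~86 ms delta as within tolerance. A tiny sketch of parsing that timestamp and computing the delta; the tolerance constant here is an assumption made only for illustration:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728410379.042145365")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Assumed tolerance; the log only states that an ~86ms delta is acceptable.
	const tolerance = 1 * time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) < float64(tolerance))
}
```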
	I1008 17:59:39.066859  548894 start.go:83] releasing machines lock for "ha-094095-m03", held for 24.772764688s
	I1008 17:59:39.066879  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.067121  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:39.069711  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.070086  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.070113  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.072386  548894 out.go:177] * Found network options:
	I1008 17:59:39.073842  548894 out.go:177]   - NO_PROXY=192.168.39.99,192.168.39.65
	W1008 17:59:39.075265  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.075288  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.075301  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.075811  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076009  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076099  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:59:39.076150  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	W1008 17:59:39.076202  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.076228  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.076306  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:59:39.076328  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:39.078554  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.078807  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079018  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079043  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079229  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079324  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079350  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079420  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.079542  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079593  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.079786  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.079847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.080000  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.080138  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.318698  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:59:39.324927  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:59:39.324990  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:59:39.343637  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:59:39.343660  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:59:39.343717  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:59:39.360309  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:59:39.373825  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:59:39.373881  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:59:39.387260  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:59:39.400202  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:59:39.520831  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:59:39.680675  548894 docker.go:233] disabling docker service ...
	I1008 17:59:39.680761  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:59:39.695394  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:59:39.710367  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:59:39.839252  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:59:39.972794  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:59:39.988321  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:59:40.006947  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:59:40.007031  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.018072  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:59:40.018137  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.029758  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.040612  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.051467  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:59:40.062960  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.074528  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.091933  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
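Taken together, the sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and open unprivileged ports via default_sysctls. The resulting fragment would look roughly like the sketch below; the exact file is not captured in this log, and the section headers are assumed from CRI-O's stock layout.

    # sketch only - sections assumed, values taken from the sed commands above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]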
	I1008 17:59:40.101742  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:59:40.111189  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:59:40.111232  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:59:40.123431  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:59:40.132781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:40.256434  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:59:40.349829  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:59:40.349903  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:59:40.354785  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:59:40.354842  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:59:40.358519  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:59:40.397714  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:59:40.397812  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.425086  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.452883  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:59:40.454244  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:59:40.455477  548894 out.go:177]   - env NO_PROXY=192.168.39.99,192.168.39.65
	I1008 17:59:40.456757  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:40.459422  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.459818  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:40.459840  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.460096  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:59:40.464498  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:40.479877  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:59:40.480107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:40.480402  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.480441  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.495933  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I1008 17:59:40.496453  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.496925  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.496949  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.497271  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.497471  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:59:40.499057  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:40.499430  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.499465  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.513547  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I1008 17:59:40.514005  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.514450  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.514473  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.514842  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.515015  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:40.515189  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.194
	I1008 17:59:40.515202  548894 certs.go:194] generating shared ca certs ...
	I1008 17:59:40.515221  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.515367  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:59:40.515423  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:59:40.515435  548894 certs.go:256] generating profile certs ...
	I1008 17:59:40.515545  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:59:40.515578  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d
	I1008 17:59:40.515597  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 17:59:40.734889  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d ...
	I1008 17:59:40.734923  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d: {Name:mkaac2d16400496ba6ef1c81a4206e8cf0480e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735091  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d ...
	I1008 17:59:40.735104  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d: {Name:mk3a55a29959b59f407eb97877f8ee016f652037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735177  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:59:40.735309  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
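The apiserver serving certificate is regenerated at this point because its SAN list must now cover the new control plane's IP (192.168.39.194) in addition to the existing node IPs, the service IP 10.96.0.1, and the kube-vip VIP 192.168.39.254. If needed, the SANs can be confirmed on a node with a standard openssl query (illustrative, not part of the test run):

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'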
	I1008 17:59:40.735433  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:59:40.735451  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:59:40.735464  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:59:40.735479  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:59:40.735491  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:59:40.735503  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:59:40.735514  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:59:40.735528  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:59:40.750415  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:59:40.750523  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:59:40.750564  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:59:40.750576  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:59:40.750597  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:59:40.750620  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:59:40.750642  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:59:40.750679  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:40.750709  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:40.750727  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:59:40.750739  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:59:40.750776  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:40.754187  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754657  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:40.754682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754891  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:40.755083  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:40.755214  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:40.755357  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:40.826678  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:59:40.831630  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:59:40.843594  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:59:40.848493  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:59:40.859904  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:59:40.864097  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:59:40.874362  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:59:40.878501  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:59:40.890535  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:59:40.895442  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:59:40.907886  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:59:40.911759  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:59:40.921878  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:59:40.947644  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:59:40.970914  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:59:40.993912  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:59:41.017348  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1008 17:59:41.040662  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:59:41.063411  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:59:41.086440  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:59:41.109681  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:59:41.132484  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:59:41.156226  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:59:41.178867  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:59:41.195488  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:59:41.212613  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:59:41.228807  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:59:41.246244  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:59:41.262224  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:59:41.277985  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1008 17:59:41.294525  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:59:41.300038  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:59:41.311084  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315442  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315488  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.321163  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:59:41.332088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:59:41.342926  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347780  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347833  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.353198  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:59:41.363300  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:59:41.373282  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377636  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377682  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.383451  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
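The hash-and-symlink steps above follow the usual OpenSSL c_rehash convention: each CA certificate placed under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so OpenSSL can locate it by hash (b5213941 for minikubeCA.pem here). A minimal manual equivalent would be:

    # illustrative only; minikube performs the same steps over ssh
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"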
	I1008 17:59:41.393738  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:59:41.397604  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:59:41.397660  548894 kubeadm.go:934] updating node {m03 192.168.39.194 8443 v1.31.1 crio true true} ...
	I1008 17:59:41.397755  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:59:41.397799  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:59:41.397831  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:59:41.412820  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:59:41.412901  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
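This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod on the new control plane. It advertises the shared VIP 192.168.39.254 on eth0 and elects a leader through the plndr-cp-lock lease in kube-system. Once the node is up, the current holder could be inspected with something like (illustrative, not part of the test run):

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'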
	I1008 17:59:41.412955  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.422366  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:59:41.422410  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.431355  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:59:41.431384  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431397  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1008 17:59:41.431416  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431363  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431494  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:41.446391  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.446418  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:59:41.446444  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:59:41.446446  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:59:41.446463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:59:41.447018  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.480884  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:59:41.480970  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
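kubectl, kubeadm and kubelet are not yet present on the fresh node, so minikube copies its locally cached v1.31.1 binaries (validated against the dl.k8s.io .sha256 files referenced above) into /var/lib/minikube/binaries/v1.31.1. The manual equivalent of that download-and-verify step would look roughly like:

    # sketch of the upstream pattern; minikube serves the files from its own cache
    curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check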
	I1008 17:59:42.313012  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:59:42.322438  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1008 17:59:42.338702  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:59:42.365144  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:59:42.382514  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:59:42.386113  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:42.397995  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:42.523088  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:59:42.540754  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:42.541257  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:42.541326  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:42.559172  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I1008 17:59:42.559678  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:42.560333  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:42.560360  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:42.560754  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:42.560977  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:42.561148  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:59:42.561320  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:59:42.561345  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:42.564781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565346  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:42.565377  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565645  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:42.565831  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:42.566030  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:42.566199  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:42.729842  548894 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:42.729907  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443"
	I1008 18:00:04.832594  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443": (22.102635583s)
	I1008 18:00:04.832637  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 18:00:05.279641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m03 minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 18:00:05.406989  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 18:00:05.528741  548894 start.go:319] duration metric: took 22.967581062s to joinCluster
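After the join, the new node is labeled with the minikube metadata shown above and the node-role.kubernetes.io/control-plane:NoSchedule taint is removed (the trailing "-"), because minikube HA members run as both control plane and worker (ControlPlane:true Worker:true in the node spec). A quick post-join check could be (illustrative):

    kubectl describe node ha-094095-m03 | grep -E 'Taints|minikube.k8s.io'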
	I1008 18:00:05.528848  548894 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:00:05.529236  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:00:05.530083  548894 out.go:177] * Verifying Kubernetes components...
	I1008 18:00:05.531162  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:00:05.714521  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:00:05.729813  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:00:05.730150  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 18:00:05.730231  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 18:00:05.730539  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m03" to be "Ready" ...
	I1008 18:00:05.730633  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:05.730651  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:05.730664  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:05.730673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:05.734671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.231617  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.231641  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.231650  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.231655  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.234903  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.731584  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.731606  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.731615  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.731620  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.735426  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.231620  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.231630  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.231634  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.235355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.730822  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.730855  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.730867  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.730873  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.735340  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:07.736449  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:08.230853  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.230878  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.230887  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.230892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.234386  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:08.731681  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.731712  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.731722  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.731727  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.735243  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.231587  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.231609  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.231618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.231623  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.235294  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.731675  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.731700  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.731709  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.731713  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.735299  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.231249  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.231335  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.231353  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.231359  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.234866  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.235558  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:10.731835  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.731862  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.731876  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.731881  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.735185  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.231623  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.231632  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.231636  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.235238  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.731791  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.731826  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.731839  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.731845  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.735179  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.231312  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.231339  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.231350  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.231356  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.234779  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.235754  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:12.731629  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.731658  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.731669  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.731673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.735274  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.231468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.231492  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.231500  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.231503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.234905  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.731604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.731613  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.731618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.734788  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.231250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.231274  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.231282  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.231287  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.234694  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.731084  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.731109  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.731117  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.731121  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.735096  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.735874  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:15.231041  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.231070  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.231079  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.231083  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.234482  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:15.731250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.731276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.731288  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.731296  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.734547  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.230897  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.230919  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.230928  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.230937  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.234261  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.731599  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.731608  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.731612  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.735249  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.736046  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:17.231278  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.231302  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.231311  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.231316  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.234212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:17.731562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.731585  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.731594  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.731597  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.735391  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.231528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.231552  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.231561  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.231565  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.234777  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.731570  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.731593  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.731601  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.731608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.735359  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.736085  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:19.231579  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.231604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.231618  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.231622  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.234902  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:19.731112  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.731142  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.731155  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.731162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.734221  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.231563  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.231591  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.231600  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.231605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.234855  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.731738  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.731773  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.731785  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.731792  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.735486  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.231659  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.231685  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.231696  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.231705  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.234967  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.235427  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:21.730803  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.730829  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.730838  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.730843  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.734021  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.231586  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.231613  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.231624  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.231630  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.234981  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.731022  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.731056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.731064  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.731070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.734252  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.231192  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.231215  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.231223  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.231228  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.234975  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.235794  548894 node_ready.go:49] node "ha-094095-m03" has status "Ready":"True"
	I1008 18:00:23.235816  548894 node_ready.go:38] duration metric: took 17.50525839s for node "ha-094095-m03" to be "Ready" ...
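The polling above repeatedly issues GET /api/v1/nodes/ha-094095-m03 until the node's Ready condition turns True, which took about 17.5s after the join here. The same check from a shell would be (illustrative):

    kubectl get node ha-094095-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'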
	I1008 18:00:23.235826  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:23.235893  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:23.235903  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.235914  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.235918  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.241231  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:23.248355  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.248435  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 18:00:23.248444  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.248452  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.248456  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.250946  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.251489  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.251502  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.251510  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.251515  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.253741  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.254169  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.254188  548894 pod_ready.go:82] duration metric: took 5.808287ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254199  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254280  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 18:00:23.254291  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.254300  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.254309  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.256714  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.257261  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.257276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.257283  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.257286  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.259498  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.260042  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.260061  548894 pod_ready.go:82] duration metric: took 5.850763ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260072  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 18:00:23.260143  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.260153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.260162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.262300  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.262973  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.262989  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.262999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.263005  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.265000  548894 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1008 18:00:23.265522  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.265544  548894 pod_ready.go:82] duration metric: took 5.464426ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265555  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265622  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 18:00:23.265634  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.265643  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.265648  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.267966  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.268468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:23.268479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.268486  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.268491  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.270736  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.271272  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.271290  548894 pod_ready.go:82] duration metric: took 5.727216ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.271300  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.431729  548894 request.go:632] Waited for 160.342792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431825  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431837  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.431850  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.431861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.438271  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:23.631298  548894 request.go:632] Waited for 192.164013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631383  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631391  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.631408  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.631433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.635040  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.635580  548894 pod_ready.go:93] pod "etcd-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.635599  548894 pod_ready.go:82] duration metric: took 364.291447ms for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.635618  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.831837  548894 request.go:632] Waited for 196.121278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831896  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831902  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.831909  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.831913  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.834801  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.031893  548894 request.go:632] Waited for 196.106655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031976  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031981  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.031989  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.031993  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.035406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.036144  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.036163  548894 pod_ready.go:82] duration metric: took 400.535944ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.036173  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.232096  548894 request.go:632] Waited for 195.798323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232173  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232180  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.232192  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.232201  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.235054  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.432054  548894 request.go:632] Waited for 196.298402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432116  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432121  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.432128  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.432132  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.435456  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.436205  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.436233  548894 pod_ready.go:82] duration metric: took 400.05192ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.436253  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.631271  548894 request.go:632] Waited for 194.926969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631366  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631374  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.631384  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.631390  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.635001  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.831928  548894 request.go:632] Waited for 195.938579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832009  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832015  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.832023  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.832027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.834879  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.835519  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.835541  548894 pod_ready.go:82] duration metric: took 399.279605ms for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.835556  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.031600  548894 request.go:632] Waited for 195.955469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031671  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031676  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.031684  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.031689  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.035187  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.231262  548894 request.go:632] Waited for 195.293412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231320  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231326  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.231339  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.231343  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.234515  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.235363  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.235391  548894 pod_ready.go:82] duration metric: took 399.824349ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.235422  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.431278  548894 request.go:632] Waited for 195.760337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431347  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431353  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.431375  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.431379  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.434406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.631990  548894 request.go:632] Waited for 196.659604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632053  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632058  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.632067  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.632070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.635545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.636227  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.636248  548894 pod_ready.go:82] duration metric: took 400.813116ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.636259  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.831790  548894 request.go:632] Waited for 195.428011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831873  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831885  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.831896  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.831903  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.835520  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.031847  548894 request.go:632] Waited for 195.394713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031926  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031931  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.031939  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.031943  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.034885  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:26.035588  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.035611  548894 pod_ready.go:82] duration metric: took 399.345696ms for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.035622  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.231657  548894 request.go:632] Waited for 195.935325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231715  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231720  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.231728  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.231732  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.234989  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.432143  548894 request.go:632] Waited for 196.401893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432242  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432253  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.432262  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.432270  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.435436  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.436096  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.436113  548894 pod_ready.go:82] duration metric: took 400.484447ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.436124  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.632222  548894 request.go:632] Waited for 196.022184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632309  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632317  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.632325  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.632332  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.636157  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.831362  548894 request.go:632] Waited for 194.278962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831419  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831424  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.831433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.831445  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.834670  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.835262  548894 pod_ready.go:93] pod "kube-proxy-krxss" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.835280  548894 pod_ready.go:82] duration metric: took 399.149562ms for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.835292  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.031407  548894 request.go:632] Waited for 196.014244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031471  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.031490  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.031499  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.034651  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.231683  548894 request.go:632] Waited for 196.28215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231743  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231750  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.231761  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.231766  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.234677  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:27.235361  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.235391  548894 pod_ready.go:82] duration metric: took 400.091229ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.235405  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.431237  548894 request.go:632] Waited for 195.72193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431329  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431337  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.431353  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.431360  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.434428  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.631604  548894 request.go:632] Waited for 196.391274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631664  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631669  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.631678  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.631683  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.635129  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.635990  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.636017  548894 pod_ready.go:82] duration metric: took 400.603779ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.636029  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.832057  548894 request.go:632] Waited for 195.932393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832129  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832137  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.832147  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.832152  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.835638  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.031786  548894 request.go:632] Waited for 195.242001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031845  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031850  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.031857  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.031861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.035281  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.035945  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.035968  548894 pod_ready.go:82] duration metric: took 399.926983ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.035978  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.232045  548894 request.go:632] Waited for 195.987112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232140  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.232148  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.232153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.235683  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.431773  548894 request.go:632] Waited for 195.354282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431855  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431860  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.431867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.431872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.435214  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.435815  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.435951  548894 pod_ready.go:82] duration metric: took 399.956305ms for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.435993  548894 pod_ready.go:39] duration metric: took 5.200153143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:28.436017  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:00:28.436094  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:00:28.452375  548894 api_server.go:72] duration metric: took 22.923490341s to wait for apiserver process to appear ...
	I1008 18:00:28.452398  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:00:28.452421  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 18:00:28.456918  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 18:00:28.456978  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 18:00:28.456986  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.456994  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.456999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.457742  548894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1008 18:00:28.457798  548894 api_server.go:141] control plane version: v1.31.1
	I1008 18:00:28.457809  548894 api_server.go:131] duration metric: took 5.40508ms to wait for apiserver health ...
	I1008 18:00:28.457822  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:00:28.632286  548894 request.go:632] Waited for 174.373411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632364  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632372  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.632382  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.632388  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.638836  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:28.647332  548894 system_pods.go:59] 24 kube-system pods found
	I1008 18:00:28.647367  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:28.647374  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:28.647379  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:28.647384  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:28.647389  548894 system_pods.go:61] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:28.647394  548894 system_pods.go:61] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:28.647399  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:28.647404  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:28.647409  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:28.647417  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:28.647426  548894 system_pods.go:61] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:28.647432  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:28.647439  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:28.647445  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:28.647451  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:28.647456  548894 system_pods.go:61] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:28.647463  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:28.647468  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:28.647476  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:28.647482  548894 system_pods.go:61] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:28.647489  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:28.647494  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:28.647499  548894 system_pods.go:61] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:28.647505  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:28.647514  548894 system_pods.go:74] duration metric: took 189.683627ms to wait for pod list to return data ...
	I1008 18:00:28.647529  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:00:28.831958  548894 request.go:632] Waited for 184.329764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832044  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.832067  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.832073  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.837077  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:28.837234  548894 default_sa.go:45] found service account: "default"
	I1008 18:00:28.837253  548894 default_sa.go:55] duration metric: took 189.716305ms for default service account to be created ...
	I1008 18:00:28.837265  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:00:29.031904  548894 request.go:632] Waited for 194.536031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031965  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031970  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.031979  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.031983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.037622  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:29.044999  548894 system_pods.go:86] 24 kube-system pods found
	I1008 18:00:29.045026  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:29.045032  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:29.045036  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:29.045039  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:29.045043  548894 system_pods.go:89] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:29.045046  548894 system_pods.go:89] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:29.045050  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:29.045053  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:29.045056  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:29.045059  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:29.045063  548894 system_pods.go:89] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:29.045066  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:29.045070  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:29.045076  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:29.045082  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:29.045086  548894 system_pods.go:89] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:29.045089  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:29.045093  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:29.045098  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:29.045104  548894 system_pods.go:89] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:29.045107  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:29.045111  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:29.045114  548894 system_pods.go:89] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:29.045117  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:29.045124  548894 system_pods.go:126] duration metric: took 207.850736ms to wait for k8s-apps to be running ...
	I1008 18:00:29.045133  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:00:29.045176  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:00:29.059678  548894 system_svc.go:56] duration metric: took 14.536958ms WaitForService to wait for kubelet
	I1008 18:00:29.059706  548894 kubeadm.go:582] duration metric: took 23.530822988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:00:29.059724  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:00:29.231880  548894 request.go:632] Waited for 172.048672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231961  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231966  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.231974  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.231981  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.238241  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:29.239300  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239332  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239347  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239353  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239361  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239366  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239371  548894 node_conditions.go:105] duration metric: took 179.642781ms to run NodePressure ...
	I1008 18:00:29.239392  548894 start.go:241] waiting for startup goroutines ...
	I1008 18:00:29.239417  548894 start.go:255] writing updated cluster config ...
	I1008 18:00:29.239708  548894 ssh_runner.go:195] Run: rm -f paused
	I1008 18:00:29.291443  548894 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:00:29.293244  548894 out.go:177] * Done! kubectl is now configured to use "ha-094095" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.658990218Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-n779r,Uid:d3a10d4a-6add-4642-961b-b7b00f9e363b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410431779985652,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T18:00:30.266893198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6c7xl,Uid:5be15582-d4c7-4ec3-95db-7f9b7db4280d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1728410297358103747,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T17:58:17.031751608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghz9x,Uid:a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410297357205428,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-10-08T17:58:17.036351692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:54520f81-08fe-4612-bef9-1fe0016c45ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410297355597197,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-08T17:58:17.037337141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&PodSandboxMetadata{Name:kube-proxy-gnmch,Uid:2e4ec0ad-049b-48e6-90b2-8b8430d821f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410284807011649,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-08T17:58:03.897237361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&PodSandboxMetadata{Name:kindnet-mclfx,Uid:fca2ce96-9193-48a5-9dc7-9d20bde6787f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410284802925523,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T17:58:03.882142734Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-094095,Uid:4ab63a85f4abc9ded81a3460d92ef212,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728410273569368635,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.99:8443,kubernetes.io/config.hash: 4ab63a85f4abc9ded81a3460d92ef212,kubernetes.io/config.seen: 2024-10-08T17:57:53.083050125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-094095,Uid:19b7e8dee4daa510f3f23034617cd71c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273552850399,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4da
a510f3f23034617cd71c,},Annotations:map[string]string{kubernetes.io/config.hash: 19b7e8dee4daa510f3f23034617cd71c,kubernetes.io/config.seen: 2024-10-08T17:57:53.083055839Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&PodSandboxMetadata{Name:etcd-ha-094095,Uid:22ef4792d58f06f8319e0939993449f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273547684723,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.99:2379,kubernetes.io/config.hash: 22ef4792d58f06f8319e0939993449f9,kubernetes.io/config.seen: 2024-10-08T17:57:53.083056812Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f021979b9e57f9b85a8710
325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-094095,Uid:2762c7155c0d46d981fd81220017a92c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273536917657,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2762c7155c0d46d981fd81220017a92c,kubernetes.io/config.seen: 2024-10-08T17:57:53.083054587Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-094095,Uid:87f977c77bded84c5cd8640a7d7c6034,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728410273535142157,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.con
tainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87f977c77bded84c5cd8640a7d7c6034,kubernetes.io/config.seen: 2024-10-08T17:57:53.083053476Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d7a862cb-934a-4957-842b-0c91940ee4a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.660133268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54887d71-eebe-4602-9ea0-699ffcb60b6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.660188833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54887d71-eebe-4602-9ea0-699ffcb60b6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.660885201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54887d71-eebe-4602-9ea0-699ffcb60b6b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.676929837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e207f74-5de1-4f8a-bda2-ea501ce40509 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.676986229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e207f74-5de1-4f8a-bda2-ea501ce40509 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.678107846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b769cda-d603-4e9b-9746-84f423d29216 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.678834598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660678814332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b769cda-d603-4e9b-9746-84f423d29216 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.679557639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c0451e1-760b-4e60-bfac-62affba9a35d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.679605695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c0451e1-760b-4e60-bfac-62affba9a35d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.679825807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c0451e1-760b-4e60-bfac-62affba9a35d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.717521788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00b7fc16-0ca9-434d-a96b-363111680a65 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.717609415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00b7fc16-0ca9-434d-a96b-363111680a65 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.718615531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f493cf40-c732-4fbb-9bec-46ebfc90356a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.719813112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660719789895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f493cf40-c732-4fbb-9bec-46ebfc90356a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.720873380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54475e79-f98c-42f3-8a6a-942fb0f1b48b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.720940840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54475e79-f98c-42f3-8a6a-942fb0f1b48b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.721682352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54475e79-f98c-42f3-8a6a-942fb0f1b48b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.760833660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a95ba9f-d90d-483b-b1fc-9c0f59a9344a name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.760898322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a95ba9f-d90d-483b-b1fc-9c0f59a9344a name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.764854881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a702593a-c587-40e9-b85e-88a4d059e63b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.765230252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660765211491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a702593a-c587-40e9-b85e-88a4d059e63b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.766146502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca582f75-6ef3-458a-a17a-902d2b2f0800 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.766195629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca582f75-6ef3-458a-a17a-902d2b2f0800 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:20 ha-094095 crio[659]: time="2024-10-08 18:04:20.766474459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca582f75-6ef3-458a-a17a-902d2b2f0800 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4f194cdf306a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   eaf6acce4786e       busybox-7dff88458-n779r
	079e7a8fee78f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   875cfacbeeb23       coredns-7c65d6cfc9-6c7xl
	1eb4935d542c2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   9d8f70dc17585       coredns-7c65d6cfc9-ghz9x
	dfdfc8735b822       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d884b794bcbf8       storage-provisioner
	17a4523dfe3c8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   c791fa497b85a       kindnet-mclfx
	347854044c294       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   29ed3e17d1aab       kube-proxy-gnmch
	8f117035b9a9a       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   13853a6e388f1       kube-vip-ha-094095
	9c418725a44b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b68b365f16def       etcd-ha-094095
	3b8241e00230e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c13c52688447       kube-apiserver-ha-094095
	0224d96e8ab1a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f021979b9e57f       kube-scheduler-ha-094095
	ec97e876ef66b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a2f40f00bb5ff       kube-controller-manager-ha-094095
	
	
	==> coredns [079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee] <==
	[INFO] 10.244.1.2:46939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173909s
	[INFO] 10.244.1.2:43197 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152065s
	[INFO] 10.244.0.4:54276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776636s
	[INFO] 10.244.0.4:42844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001027134s
	[INFO] 10.244.0.4:33552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087486s
	[INFO] 10.244.0.4:40894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128456s
	[INFO] 10.244.2.2:37156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090694s
	[INFO] 10.244.2.2:35975 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000342501s
	[INFO] 10.244.2.2:56819 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008022s
	[INFO] 10.244.2.2:40613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107574s
	[INFO] 10.244.1.2:38959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208641s
	[INFO] 10.244.0.4:58386 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011149s
	[INFO] 10.244.0.4:56827 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016311s
	[INFO] 10.244.0.4:52547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068216s
	[INFO] 10.244.0.4:59149 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077593s
	[INFO] 10.244.2.2:49444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156535s
	[INFO] 10.244.2.2:51787 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111699s
	[INFO] 10.244.2.2:52768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107964s
	[INFO] 10.244.2.2:53538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071551s
	[INFO] 10.244.1.2:52231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220976s
	[INFO] 10.244.0.4:45893 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145642s
	[INFO] 10.244.0.4:50564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012308s
	[INFO] 10.244.0.4:40912 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110407s
	[INFO] 10.244.2.2:48559 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182361s
	[INFO] 10.244.2.2:42189 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123843s
	
	
	==> coredns [1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02] <==
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000403051s
	[INFO] 10.244.2.2:33432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198542s
	[INFO] 10.244.2.2:43175 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00011602s
	[INFO] 10.244.2.2:39986 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00007233s
	[INFO] 10.244.2.2:43098 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001798194s
	[INFO] 10.244.1.2:51904 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006238586s
	[INFO] 10.244.1.2:39841 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245332s
	[INFO] 10.244.1.2:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010411466s
	[INFO] 10.244.0.4:36134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131817s
	[INFO] 10.244.0.4:60392 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136485s
	[INFO] 10.244.0.4:47750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001276s
	[INFO] 10.244.0.4:53066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112589s
	[INFO] 10.244.2.2:50951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171312s
	[INFO] 10.244.2.2:36151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001719697s
	[INFO] 10.244.2.2:59876 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00134295s
	[INFO] 10.244.2.2:34156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121408s
	[INFO] 10.244.1.2:40835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210172s
	[INFO] 10.244.1.2:35561 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210453s
	[INFO] 10.244.1.2:58285 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:57787 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236305s
	[INFO] 10.244.1.2:52947 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185701s
	[INFO] 10.244.1.2:38121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000200581s
	[INFO] 10.244.0.4:37934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195898s
	[INFO] 10.244.2.2:51605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210836s
	[INFO] 10.244.2.2:44666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117181s
	
	
	==> describe nodes <==
	Name:               ha-094095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:57:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-094095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f253fb8c294514826ad247cbfc784d
	  System UUID:                14f253fb-8c29-4514-826a-d247cbfc784d
	  Boot ID:                    6cdd0146-42c4-4814-93e6-3af5699e77ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-n779r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-6c7xl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-ghz9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-094095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-mclfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-094095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-094095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-gnmch                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-094095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-094095                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m15s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-094095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-094095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-094095 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  NodeReady                6m5s   kubelet          Node ha-094095 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	
	
	Name:               ha-094095-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:01:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-094095-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6846904a528149b4bec4ab05607145f5
	  System UUID:                6846904a-5281-49b4-bec4-ab05607145f5
	  Boot ID:                    92a2dec0-2bc9-44db-94e9-e4a68690b144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxdk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-094095-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-f5x42                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-094095-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-094095-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-r55hk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-094095-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-094095-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node ha-094095-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-094095-m02 status is now: NodeNotReady
	
	
	Name:               ha-094095-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    ha-094095-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cca5410c10d94705a0a750a2a36dfcf7
	  System UUID:                cca5410c-10d9-4705-a0a7-50a2a36dfcf7
	  Boot ID:                    a52600ea-f5af-4184-95ce-18bc5a4ff10e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rxwcg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-094095-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-8v7s4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-094095-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-094095-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-krxss                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-094095-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-094095-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-094095-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	
	
	Name:               ha-094095-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_01_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:01:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-094095-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6fe409be99242ac858632e59843d080
	  System UUID:                c6fe409b-e992-42ac-8586-32e59843d080
	  Boot ID:                    10df0150-6a8d-4d3e-8551-af1fe0638414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jhqlp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-jjgsh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m12s)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m12s)  kubelet          Node ha-094095-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m12s)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-094095-m04 status is now: NodeReady
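
	The node descriptions above show ha-094095-m02 tainted node.kubernetes.io/unreachable with all conditions Unknown after its kubelet stopped posting status, while ha-094095, ha-094095-m03 and ha-094095-m04 remain Ready. Below is a minimal client-go sketch (not part of the test suite; it assumes a kubeconfig for this cluster at the default location) that prints the same Ready/taint summary programmatically.

// Minimal sketch: summarise node readiness and taints via client-go,
// mirroring what `kubectl describe nodes` shows above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: ~/.kube/config points at the ha-094095 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status)
			}
		}
		fmt.Printf("%-16s Ready=%-8s taints=%d\n", n.Name, ready, len(n.Spec.Taints))
	}
}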
	
	
	==> dmesg <==
	[Oct 8 17:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050015] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039380] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.822235] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417178] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.589695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.867596] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.064259] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063997] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.185531] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.116355] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.250177] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.801506] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.578485] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.057293] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117363] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.084526] kauditd_printk_skb: 79 callbacks suppressed
	[Oct 8 17:58] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.243247] kauditd_printk_skb: 28 callbacks suppressed
	[ +42.891327] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7] <==
	{"level":"warn","ts":"2024-10-08T18:04:21.016143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.019736Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.021024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.030832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.036160Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.042876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.047240Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.050665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.058541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.064091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.070769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.080850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.084028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.113488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.115346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.119259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.128135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.133538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.138845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.141809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.144706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.147708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.152686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.157820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:21.219435Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
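
	The repeated heartbeat drops above, all toward remote peer a7c10c98480f83f3 with remote-peer-active false, are what the surviving etcd leader logs while that peer (presumably the ha-094095-m02 member, whose node stopped reporting above) stays unreachable. Below is a minimal etcd clientv3 sketch (not part of the test suite; the endpoint and certificate paths are assumptions based on a default minikube/kubeadm layout) that lists the members and the current leader from the primary node.

// Minimal sketch: ask the surviving etcd member for the cluster member list
// and the current leader. Paths and endpoint are assumptions; kubeadm-style
// clusters require client TLS to reach etcd on :2379.
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/apiserver-etcd-client.crt", // assumed path
		"/var/lib/minikube/certs/apiserver-etcd-client.key") // assumed path
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	endpoint := "https://192.168.39.99:2379" // primary's InternalIP from the node description
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
		TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	members, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}
	for _, m := range members.Members {
		fmt.Printf("member %x %-16s peerURLs=%v\n", m.ID, m.Name, m.PeerURLs)
	}
	if st, err := cli.Status(ctx, endpoint); err == nil {
		fmt.Printf("leader is member %x\n", st.Leader)
	}
}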
	
	
	==> kernel <==
	 18:04:21 up 6 min,  0 users,  load average: 0.39, 0.38, 0.19
	Linux ha-094095 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a] <==
	I1008 18:03:46.529884       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:03:56.530637       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:03:56.530728       1 main.go:299] handling current node
	I1008 18:03:56.530780       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:03:56.530799       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:03:56.530947       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:03:56.530969       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:03:56.531022       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:03:56.531040       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:06.521023       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:06.521156       1 main.go:299] handling current node
	I1008 18:04:06.521246       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:06.521314       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:06.521746       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:06.521831       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:06.522370       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:06.522563       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:16.529710       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:16.529904       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:16.530111       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:16.530143       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:16.530205       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:16.530224       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:16.530303       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:16.530322       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b] <==
	I1008 17:57:58.485779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 17:57:58.491495       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.99]
	I1008 17:57:58.492135       1 controller.go:615] quota admission added evaluator for: endpoints
	I1008 17:57:58.499200       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 17:57:58.903637       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 17:58:00.054350       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 17:58:00.074068       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 17:58:00.230930       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 17:58:03.854509       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1008 17:58:03.954697       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1008 18:00:38.037771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45714: use of closed network connection
	E1008 18:00:38.232043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45744: use of closed network connection
	E1008 18:00:38.418256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45748: use of closed network connection
	E1008 18:00:38.622516       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45768: use of closed network connection
	E1008 18:00:38.796785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45788: use of closed network connection
	E1008 18:00:38.988513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45812: use of closed network connection
	E1008 18:00:39.174560       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45828: use of closed network connection
	E1008 18:00:39.350317       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45850: use of closed network connection
	E1008 18:00:39.525813       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45854: use of closed network connection
	E1008 18:00:39.828048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49850: use of closed network connection
	E1008 18:00:40.000068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49874: use of closed network connection
	E1008 18:00:40.192753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49888: use of closed network connection
	E1008 18:00:40.379456       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49904: use of closed network connection
	E1008 18:00:40.562970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49918: use of closed network connection
	E1008 18:00:40.742948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49938: use of closed network connection
	
	
	==> kube-controller-manager [ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb] <==
	I1008 18:01:09.767306       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-094095-m04" podCIDRs=["10.244.3.0/24"]
	I1008 18:01:09.767482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.015142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.174634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.537159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.265250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.321671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716760       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-094095-m04"
	I1008 18:01:13.777151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:20.033294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108639       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:01:28.124876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.732886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:40.603842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:02:28.755242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.757889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:02:28.778675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.891800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.567817ms"
	I1008 18:02:28.891887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.019µs"
	I1008 18:02:30.013028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:33.959772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	
	
	==> kube-proxy [347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 17:58:05.534485       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 17:58:05.568766       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	E1008 17:58:05.568940       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 17:58:05.609153       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 17:58:05.609181       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 17:58:05.609201       1 server_linux.go:169] "Using iptables Proxier"
	I1008 17:58:05.612762       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 17:58:05.613968       1 server.go:483] "Version info" version="v1.31.1"
	I1008 17:58:05.614042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 17:58:05.616792       1 config.go:199] "Starting service config controller"
	I1008 17:58:05.617139       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 17:58:05.617374       1 config.go:105] "Starting endpoint slice config controller"
	I1008 17:58:05.617451       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 17:58:05.618851       1 config.go:328] "Starting node config controller"
	I1008 17:58:05.619090       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 17:58:05.718484       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 17:58:05.718497       1 shared_informer.go:320] Caches are synced for service config
	I1008 17:58:05.720100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20] <==
	E1008 18:00:30.199446       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rzflt" node="ha-094095-m03"
	E1008 18:00:30.199562       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e0ead4a-bdd7-4fe2-8070-a2e4680f7988(default/busybox-7dff88458-rzflt) was assumed on ha-094095-m03 but assigned to ha-094095-m02" pod="default/busybox-7dff88458-rzflt"
	E1008 18:00:30.201601       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-rzflt"
	I1008 18:00:30.201672       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rzflt" node="ha-094095-m02"
	E1008 18:00:30.241278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.243855       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 00074fc5-40f9-403b-9cec-3f333b177d47(default/busybox-7dff88458-2hz9n) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2hz9n"
	E1008 18:00:30.248134       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-2hz9n"
	I1008 18:00:30.248955       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.302814       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.303201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 399813b8-6199-4631-af76-66e7e8bf4b8c(default/busybox-7dff88458-rxwcg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rxwcg"
	E1008 18:00:30.303327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" pod="default/busybox-7dff88458-rxwcg"
	I1008 18:00:30.303461       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.454050       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-l6wvv\" not found" pod="default/busybox-7dff88458-l6wvv"
	E1008 18:01:09.806729       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.806888       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c9b872af-5075-4c26-99cf-282b077912ee(kube-system/kube-proxy-jjgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jjgsh"
	E1008 18:01:09.806916       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-jjgsh"
	I1008 18:01:09.806962       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.807512       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.807581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2f9978f0-fb58-41fb-ac79-c07ec22f8b12(kube-system/kindnet-jhqlp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jhqlp"
	E1008 18:01:09.807603       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" pod="kube-system/kindnet-jhqlp"
	I1008 18:01:09.807627       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.868191       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	E1008 18:01:09.869875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6257090e-676b-45ea-9261-104b1ba829f3(kube-system/kube-proxy-x5wf6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-x5wf6"
	E1008 18:01:09.871281       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-x5wf6"
	I1008 18:01:09.871556       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	
	
	==> kubelet <==
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:03:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293753    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293782    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295059    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295735    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297939    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297984    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300086    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300349    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302156    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302530    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304820    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304911    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.254307    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307018    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307069    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:10 ha-094095 kubelet[1309]: E1008 18:04:10.309307    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410650308966284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:10 ha-094095 kubelet[1309]: E1008 18:04:10.309339    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410650308966284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:20 ha-094095 kubelet[1309]: E1008 18:04:20.311278    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660310643006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:20 ha-094095 kubelet[1309]: E1008 18:04:20.311350    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660310643006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.51s)
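
The kubelet log above repeats one non-fatal error every minute: the iptables canary cannot create the KUBE-KUBELET-CANARY chain in the ip6tables "nat" table ("Table does not exist (do you need to insmod?)"), which usually means the ip6 NAT table module is not loaded in the guest kernel. A minimal sketch of how this could be checked by hand on the ha-094095 guest is below; it assumes SSH access through the minikube binary and that the guest image ships the ip6table_nat module, and it was not part of the recorded test run.

    out/minikube-linux-amd64 -p ha-094095 ssh "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"
    out/minikube-linux-amd64 -p ha-094095 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"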

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1008 18:04:22.755442  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.123483551s)
ha_test.go:309: expected profile "ha-094095" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-094095\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-094095\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-094095\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.99\",\"Port\":8443,\"Kubernet
esVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.65\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.194\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.33\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"meta
llb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262
144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (1.263106356s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m03_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-094095 node start m02 -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:57:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:57:18.946903  548894 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:57:18.947145  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947153  548894 out.go:358] Setting ErrFile to fd 2...
	I1008 17:57:18.947157  548894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:57:18.947344  548894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:57:18.947912  548894 out.go:352] Setting JSON to false
	I1008 17:57:18.948876  548894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5991,"bootTime":1728404248,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:57:18.948933  548894 start.go:139] virtualization: kvm guest
	I1008 17:57:18.950969  548894 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:57:18.952033  548894 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:57:18.952082  548894 notify.go:220] Checking for updates...
	I1008 17:57:18.954369  548894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:57:18.955681  548894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:57:18.956842  548894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:18.957830  548894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:57:18.959069  548894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:57:18.960234  548894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:57:18.994761  548894 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 17:57:18.995800  548894 start.go:297] selected driver: kvm2
	I1008 17:57:18.995813  548894 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:57:18.995824  548894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:57:18.996586  548894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:18.996660  548894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:57:19.011273  548894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:57:19.011313  548894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:57:19.011548  548894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:57:19.011585  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:19.011625  548894 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 17:57:19.011636  548894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 17:57:19.011687  548894 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:19.011804  548894 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:57:19.013449  548894 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 17:57:19.014789  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:19.014817  548894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 17:57:19.014826  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:57:19.014907  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:57:19.014919  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:57:19.015263  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:19.015288  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json: {Name:mk4a4bbfc5e4991434a64e3c2f362f3acde8e751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:19.015419  548894 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:57:19.015446  548894 start.go:364] duration metric: took 15.142µs to acquireMachinesLock for "ha-094095"
	I1008 17:57:19.015463  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:57:19.015507  548894 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 17:57:19.017014  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:57:19.017133  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:57:19.017171  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:57:19.031391  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I1008 17:57:19.031835  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:57:19.032448  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:57:19.032468  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:57:19.032843  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:57:19.033048  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:19.033189  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:19.033336  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:57:19.033367  548894 client.go:168] LocalClient.Create starting
	I1008 17:57:19.033396  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:57:19.033427  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033446  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033499  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:57:19.033517  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:57:19.033530  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:57:19.033545  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:57:19.033558  548894 main.go:141] libmachine: (ha-094095) Calling .PreCreateCheck
	I1008 17:57:19.033903  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:19.034253  548894 main.go:141] libmachine: Creating machine...
	I1008 17:57:19.034267  548894 main.go:141] libmachine: (ha-094095) Calling .Create
	I1008 17:57:19.034420  548894 main.go:141] libmachine: (ha-094095) Creating KVM machine...
	I1008 17:57:19.035565  548894 main.go:141] libmachine: (ha-094095) DBG | found existing default KVM network
	I1008 17:57:19.036249  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.036120  548918 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1008 17:57:19.036283  548894 main.go:141] libmachine: (ha-094095) DBG | created network xml: 
	I1008 17:57:19.036302  548894 main.go:141] libmachine: (ha-094095) DBG | <network>
	I1008 17:57:19.036314  548894 main.go:141] libmachine: (ha-094095) DBG |   <name>mk-ha-094095</name>
	I1008 17:57:19.036323  548894 main.go:141] libmachine: (ha-094095) DBG |   <dns enable='no'/>
	I1008 17:57:19.036331  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036342  548894 main.go:141] libmachine: (ha-094095) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 17:57:19.036349  548894 main.go:141] libmachine: (ha-094095) DBG |     <dhcp>
	I1008 17:57:19.036361  548894 main.go:141] libmachine: (ha-094095) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 17:57:19.036370  548894 main.go:141] libmachine: (ha-094095) DBG |     </dhcp>
	I1008 17:57:19.036386  548894 main.go:141] libmachine: (ha-094095) DBG |   </ip>
	I1008 17:57:19.036427  548894 main.go:141] libmachine: (ha-094095) DBG |   
	I1008 17:57:19.036447  548894 main.go:141] libmachine: (ha-094095) DBG | </network>
	I1008 17:57:19.036455  548894 main.go:141] libmachine: (ha-094095) DBG | 
	I1008 17:57:19.041263  548894 main.go:141] libmachine: (ha-094095) DBG | trying to create private KVM network mk-ha-094095 192.168.39.0/24...
	I1008 17:57:19.105180  548894 main.go:141] libmachine: (ha-094095) DBG | private KVM network mk-ha-094095 192.168.39.0/24 created
	I1008 17:57:19.105208  548894 main.go:141] libmachine: (ha-094095) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.105220  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.105167  548918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.105237  548894 main.go:141] libmachine: (ha-094095) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:57:19.105263  548894 main.go:141] libmachine: (ha-094095) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:57:19.385345  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.385226  548918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa...
	I1008 17:57:19.617977  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617838  548918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk...
	I1008 17:57:19.618008  548894 main.go:141] libmachine: (ha-094095) DBG | Writing magic tar header
	I1008 17:57:19.618021  548894 main.go:141] libmachine: (ha-094095) DBG | Writing SSH key tar header
	I1008 17:57:19.618031  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:19.617973  548918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 ...
	I1008 17:57:19.618141  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095
	I1008 17:57:19.618165  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095 (perms=drwx------)
	I1008 17:57:19.618171  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:57:19.618178  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:57:19.618187  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:57:19.618193  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:57:19.618199  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:57:19.618206  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:57:19.618211  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:57:19.618216  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:57:19.618224  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:57:19.618231  548894 main.go:141] libmachine: (ha-094095) DBG | Checking permissions on dir: /home
	I1008 17:57:19.618238  548894 main.go:141] libmachine: (ha-094095) DBG | Skipping /home - not owner
	I1008 17:57:19.618249  548894 main.go:141] libmachine: (ha-094095) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:57:19.618261  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:19.619347  548894 main.go:141] libmachine: (ha-094095) define libvirt domain using xml: 
	I1008 17:57:19.619369  548894 main.go:141] libmachine: (ha-094095) <domain type='kvm'>
	I1008 17:57:19.619378  548894 main.go:141] libmachine: (ha-094095)   <name>ha-094095</name>
	I1008 17:57:19.619388  548894 main.go:141] libmachine: (ha-094095)   <memory unit='MiB'>2200</memory>
	I1008 17:57:19.619396  548894 main.go:141] libmachine: (ha-094095)   <vcpu>2</vcpu>
	I1008 17:57:19.619402  548894 main.go:141] libmachine: (ha-094095)   <features>
	I1008 17:57:19.619410  548894 main.go:141] libmachine: (ha-094095)     <acpi/>
	I1008 17:57:19.619420  548894 main.go:141] libmachine: (ha-094095)     <apic/>
	I1008 17:57:19.619427  548894 main.go:141] libmachine: (ha-094095)     <pae/>
	I1008 17:57:19.619444  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619470  548894 main.go:141] libmachine: (ha-094095)   </features>
	I1008 17:57:19.619484  548894 main.go:141] libmachine: (ha-094095)   <cpu mode='host-passthrough'>
	I1008 17:57:19.619491  548894 main.go:141] libmachine: (ha-094095)   
	I1008 17:57:19.619500  548894 main.go:141] libmachine: (ha-094095)   </cpu>
	I1008 17:57:19.619506  548894 main.go:141] libmachine: (ha-094095)   <os>
	I1008 17:57:19.619515  548894 main.go:141] libmachine: (ha-094095)     <type>hvm</type>
	I1008 17:57:19.619527  548894 main.go:141] libmachine: (ha-094095)     <boot dev='cdrom'/>
	I1008 17:57:19.619536  548894 main.go:141] libmachine: (ha-094095)     <boot dev='hd'/>
	I1008 17:57:19.619547  548894 main.go:141] libmachine: (ha-094095)     <bootmenu enable='no'/>
	I1008 17:57:19.619559  548894 main.go:141] libmachine: (ha-094095)   </os>
	I1008 17:57:19.619569  548894 main.go:141] libmachine: (ha-094095)   <devices>
	I1008 17:57:19.619578  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='cdrom'>
	I1008 17:57:19.619590  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/boot2docker.iso'/>
	I1008 17:57:19.619601  548894 main.go:141] libmachine: (ha-094095)       <target dev='hdc' bus='scsi'/>
	I1008 17:57:19.619612  548894 main.go:141] libmachine: (ha-094095)       <readonly/>
	I1008 17:57:19.619621  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619648  548894 main.go:141] libmachine: (ha-094095)     <disk type='file' device='disk'>
	I1008 17:57:19.619669  548894 main.go:141] libmachine: (ha-094095)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:57:19.619678  548894 main.go:141] libmachine: (ha-094095)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/ha-094095.rawdisk'/>
	I1008 17:57:19.619688  548894 main.go:141] libmachine: (ha-094095)       <target dev='hda' bus='virtio'/>
	I1008 17:57:19.619694  548894 main.go:141] libmachine: (ha-094095)     </disk>
	I1008 17:57:19.619711  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619719  548894 main.go:141] libmachine: (ha-094095)       <source network='mk-ha-094095'/>
	I1008 17:57:19.619724  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619731  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619735  548894 main.go:141] libmachine: (ha-094095)     <interface type='network'>
	I1008 17:57:19.619743  548894 main.go:141] libmachine: (ha-094095)       <source network='default'/>
	I1008 17:57:19.619747  548894 main.go:141] libmachine: (ha-094095)       <model type='virtio'/>
	I1008 17:57:19.619752  548894 main.go:141] libmachine: (ha-094095)     </interface>
	I1008 17:57:19.619756  548894 main.go:141] libmachine: (ha-094095)     <serial type='pty'>
	I1008 17:57:19.619763  548894 main.go:141] libmachine: (ha-094095)       <target port='0'/>
	I1008 17:57:19.619769  548894 main.go:141] libmachine: (ha-094095)     </serial>
	I1008 17:57:19.619798  548894 main.go:141] libmachine: (ha-094095)     <console type='pty'>
	I1008 17:57:19.619831  548894 main.go:141] libmachine: (ha-094095)       <target type='serial' port='0'/>
	I1008 17:57:19.619844  548894 main.go:141] libmachine: (ha-094095)     </console>
	I1008 17:57:19.619859  548894 main.go:141] libmachine: (ha-094095)     <rng model='virtio'>
	I1008 17:57:19.619885  548894 main.go:141] libmachine: (ha-094095)       <backend model='random'>/dev/random</backend>
	I1008 17:57:19.619895  548894 main.go:141] libmachine: (ha-094095)     </rng>
	I1008 17:57:19.619903  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619912  548894 main.go:141] libmachine: (ha-094095)     
	I1008 17:57:19.619921  548894 main.go:141] libmachine: (ha-094095)   </devices>
	I1008 17:57:19.619930  548894 main.go:141] libmachine: (ha-094095) </domain>
	I1008 17:57:19.619943  548894 main.go:141] libmachine: (ha-094095) 
	I1008 17:57:19.623957  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:c2:1c:c1 in network default
	I1008 17:57:19.624533  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:19.624567  548894 main.go:141] libmachine: (ha-094095) Ensuring networks are active...
	I1008 17:57:19.625167  548894 main.go:141] libmachine: (ha-094095) Ensuring network default is active
	I1008 17:57:19.625513  548894 main.go:141] libmachine: (ha-094095) Ensuring network mk-ha-094095 is active
	I1008 17:57:19.626008  548894 main.go:141] libmachine: (ha-094095) Getting domain xml...
	I1008 17:57:19.626619  548894 main.go:141] libmachine: (ha-094095) Creating domain...
	I1008 17:57:20.795900  548894 main.go:141] libmachine: (ha-094095) Waiting to get IP...
	I1008 17:57:20.796661  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:20.797068  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:20.797096  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:20.797046  548918 retry.go:31] will retry after 205.911312ms: waiting for machine to come up
	I1008 17:57:21.004526  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.004999  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.005029  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.004943  548918 retry.go:31] will retry after 273.425618ms: waiting for machine to come up
	I1008 17:57:21.280506  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.280861  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.280894  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.280804  548918 retry.go:31] will retry after 435.479274ms: waiting for machine to come up
	I1008 17:57:21.717289  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:21.717636  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:21.717662  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:21.717595  548918 retry.go:31] will retry after 576.307625ms: waiting for machine to come up
	I1008 17:57:22.295076  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.295499  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.295527  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.295461  548918 retry.go:31] will retry after 636.373654ms: waiting for machine to come up
	I1008 17:57:22.933047  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:22.933364  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:22.933391  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:22.933317  548918 retry.go:31] will retry after 741.414571ms: waiting for machine to come up
	I1008 17:57:23.676038  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:23.676368  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:23.676441  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:23.676362  548918 retry.go:31] will retry after 726.748749ms: waiting for machine to come up
	I1008 17:57:24.404401  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:24.404771  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:24.404801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:24.404726  548918 retry.go:31] will retry after 1.449573768s: waiting for machine to come up
	I1008 17:57:25.856490  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:25.856930  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:25.856961  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:25.856877  548918 retry.go:31] will retry after 1.340937339s: waiting for machine to come up
	I1008 17:57:27.199433  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:27.199826  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:27.199863  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:27.199804  548918 retry.go:31] will retry after 1.798441674s: waiting for machine to come up
	I1008 17:57:28.999424  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:28.999921  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:28.999945  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:28.999873  548918 retry.go:31] will retry after 1.937304185s: waiting for machine to come up
	I1008 17:57:30.939309  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:30.939791  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:30.939819  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:30.939738  548918 retry.go:31] will retry after 3.500432638s: waiting for machine to come up
	I1008 17:57:34.441923  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:34.442356  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:34.442385  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:34.442290  548918 retry.go:31] will retry after 3.09089187s: waiting for machine to come up
	I1008 17:57:37.536439  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:37.536781  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find current IP address of domain ha-094095 in network mk-ha-094095
	I1008 17:57:37.536801  548894 main.go:141] libmachine: (ha-094095) DBG | I1008 17:57:37.536736  548918 retry.go:31] will retry after 5.395822577s: waiting for machine to come up
	I1008 17:57:42.937057  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937477  548894 main.go:141] libmachine: (ha-094095) Found IP for machine: 192.168.39.99
	I1008 17:57:42.937503  548894 main.go:141] libmachine: (ha-094095) Reserving static IP address...
	I1008 17:57:42.937532  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has current primary IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:42.937886  548894 main.go:141] libmachine: (ha-094095) DBG | unable to find host DHCP lease matching {name: "ha-094095", mac: "52:54:00:bf:fa:3a", ip: "192.168.39.99"} in network mk-ha-094095
	I1008 17:57:43.006083  548894 main.go:141] libmachine: (ha-094095) DBG | Getting to WaitForSSH function...
	I1008 17:57:43.006114  548894 main.go:141] libmachine: (ha-094095) Reserved static IP address: 192.168.39.99
	I1008 17:57:43.006128  548894 main.go:141] libmachine: (ha-094095) Waiting for SSH to be available...
	I1008 17:57:43.008468  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.008879  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.008907  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.009020  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH client type: external
	I1008 17:57:43.009041  548894 main.go:141] libmachine: (ha-094095) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa (-rw-------)
	I1008 17:57:43.009062  548894 main.go:141] libmachine: (ha-094095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:57:43.009119  548894 main.go:141] libmachine: (ha-094095) DBG | About to run SSH command:
	I1008 17:57:43.009138  548894 main.go:141] libmachine: (ha-094095) DBG | exit 0
	I1008 17:57:43.130112  548894 main.go:141] libmachine: (ha-094095) DBG | SSH cmd err, output: <nil>: 
	I1008 17:57:43.130367  548894 main.go:141] libmachine: (ha-094095) KVM machine creation complete!
	I1008 17:57:43.130653  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:43.131203  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131384  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:43.131553  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:57:43.131567  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:57:43.132696  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:57:43.132710  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:57:43.132718  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:57:43.132724  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.134855  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135157  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.135186  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.135341  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.135500  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135635  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.135753  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.135900  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.136116  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.136132  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:57:43.237532  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.237562  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:57:43.237573  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.240102  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240361  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.240386  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.240541  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.240728  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.240888  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.241033  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.241194  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.241372  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.241387  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:57:43.342754  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:57:43.342848  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:57:43.342862  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:57:43.342875  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343129  548894 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 17:57:43.343169  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.343355  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.345781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346150  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.346172  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.346401  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.346572  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346747  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.346898  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.347071  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.347247  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.347259  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 17:57:43.463654  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 17:57:43.463696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.466255  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466646  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.466682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.466840  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.467010  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467143  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.467243  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.467378  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.467581  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.467603  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:57:43.579438  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:57:43.579474  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:57:43.579515  548894 buildroot.go:174] setting up certificates
	I1008 17:57:43.579525  548894 provision.go:84] configureAuth start
	I1008 17:57:43.579536  548894 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 17:57:43.579814  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:43.582136  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582503  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.582528  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.582696  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.584820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585187  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.585207  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.585310  548894 provision.go:143] copyHostCerts
	I1008 17:57:43.585352  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585401  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:57:43.585412  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:57:43.585494  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:57:43.585624  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585659  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:57:43.585677  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:57:43.585716  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:57:43.585797  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585818  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:57:43.585827  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:57:43.585862  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:57:43.585945  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 17:57:43.673469  548894 provision.go:177] copyRemoteCerts
	I1008 17:57:43.673538  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:57:43.673570  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.676617  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.676907  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.676942  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.677124  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.677287  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.677489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.677596  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:43.759344  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:57:43.759416  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 17:57:43.781917  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:57:43.781981  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:57:43.804256  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:57:43.804312  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:57:43.826921  548894 provision.go:87] duration metric: took 247.384803ms to configureAuth
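The server certificate generated above is copied to /etc/docker on the guest together with its key and the CA; if needed, its SANs (127.0.0.1, 192.168.39.99, ha-094095, localhost, minikube) can be confirmed on the VM with a quick openssl check. A minimal sketch, not part of this run, using the paths shown in the log:

    # run inside the guest (or over ssh as above)
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem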
	I1008 17:57:43.826944  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:57:43.827107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:57:43.827185  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:43.830340  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830654  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:43.830685  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:43.830917  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:43.831091  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831234  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:43.831362  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:43.831590  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:43.831761  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:43.831775  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:57:44.043562  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:57:44.043593  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:57:44.043602  548894 main.go:141] libmachine: (ha-094095) Calling .GetURL
	I1008 17:57:44.044870  548894 main.go:141] libmachine: (ha-094095) DBG | Using libvirt version 6000000
	I1008 17:57:44.047119  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047449  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.047478  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.047637  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:57:44.047652  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:57:44.047661  548894 client.go:171] duration metric: took 25.014282218s to LocalClient.Create
	I1008 17:57:44.047690  548894 start.go:167] duration metric: took 25.014354001s to libmachine.API.Create "ha-094095"
	I1008 17:57:44.047702  548894 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 17:57:44.047716  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:57:44.047739  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.048014  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:57:44.048045  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.050022  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050306  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.050347  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.050505  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.050666  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.050837  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.050949  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.132504  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:57:44.136621  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:57:44.136645  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:57:44.136713  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:57:44.136806  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:57:44.136818  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:57:44.136924  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:57:44.146103  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:44.168356  548894 start.go:296] duration metric: took 120.640584ms for postStartSetup
	I1008 17:57:44.168411  548894 main.go:141] libmachine: (ha-094095) Calling .GetConfigRaw
	I1008 17:57:44.169087  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.172425  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.172799  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.172823  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.173056  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:57:44.173256  548894 start.go:128] duration metric: took 25.157738621s to createHost
	I1008 17:57:44.173281  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.175394  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175685  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.175711  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.175872  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.176022  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176162  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.176257  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.176381  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:57:44.176571  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 17:57:44.176587  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:57:44.278668  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410264.248509692
	
	I1008 17:57:44.278691  548894 fix.go:216] guest clock: 1728410264.248509692
	I1008 17:57:44.278710  548894 fix.go:229] Guest: 2024-10-08 17:57:44.248509692 +0000 UTC Remote: 2024-10-08 17:57:44.173269639 +0000 UTC m=+25.264229848 (delta=75.240053ms)
	I1008 17:57:44.278730  548894 fix.go:200] guest clock delta is within tolerance: 75.240053ms
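The guest/host clock comparison above (delta of roughly 75ms) can be reproduced by hand; a rough sketch, assuming the SSH key path and guest IP from this log and that bc is available on the host:

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa \
      docker@192.168.39.99 'date +%s.%N')
    # minikube only intervenes when the skew exceeds its tolerance; here it was ~75ms
    echo "delta_s=$(echo "$guest_ts - $host_ts" | bc)"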
	I1008 17:57:44.278735  548894 start.go:83] releasing machines lock for "ha-094095", held for 25.26328044s
	I1008 17:57:44.278761  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.279011  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:44.281403  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281704  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.281728  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.281844  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282331  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282492  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:57:44.282608  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:57:44.282649  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.282695  548894 ssh_runner.go:195] Run: cat /version.json
	I1008 17:57:44.282718  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:57:44.285197  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285467  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285561  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285596  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.285720  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.285878  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.285947  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:44.285972  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:44.286009  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286152  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.286166  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:57:44.286407  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:57:44.286555  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:57:44.286685  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:57:44.362923  548894 ssh_runner.go:195] Run: systemctl --version
	I1008 17:57:44.382917  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:57:44.543848  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:57:44.549734  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:57:44.549799  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:57:44.566434  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:57:44.566456  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:57:44.566531  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:57:44.582382  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:57:44.595796  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:57:44.595845  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:57:44.608932  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:57:44.621723  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:57:44.737514  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:57:44.894846  548894 docker.go:233] disabling docker service ...
	I1008 17:57:44.894913  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:57:44.908802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:57:44.920944  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:57:45.040515  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:57:45.156709  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:57:45.170339  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:57:45.188088  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:57:45.188162  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.197887  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:57:45.197965  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.207765  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.217192  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.226820  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:57:45.236401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.246021  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.261908  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:57:45.271409  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:57:45.280221  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:57:45.280279  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:57:45.293099  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:57:45.301781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:45.406440  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
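Taken together, the crio.minikube drop-in and the sed edits above should leave CRI-O with the pause image, cgroup driver and unprivileged-port sysctl shown below. This is a reconstruction from the commands in the log, not output captured from the VM:

    # /etc/sysconfig/crio.minikube (written by the tee earlier)
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

    # expected fragment of /etc/crio/crio.conf.d/02-crio.conf after the sed edits
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # quick check on the guest
    sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'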
	I1008 17:57:45.492188  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:57:45.492292  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:57:45.496696  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:57:45.496749  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:57:45.500380  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:57:45.538828  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:57:45.538916  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.566412  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:57:45.594012  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:57:45.595183  548894 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 17:57:45.597820  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598135  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:57:45.598169  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:57:45.598406  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:57:45.602368  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:57:45.614968  548894 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 17:57:45.615076  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:57:45.615144  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:45.645417  548894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 17:57:45.645488  548894 ssh_runner.go:195] Run: which lz4
	I1008 17:57:45.649242  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1008 17:57:45.649331  548894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 17:57:45.653358  548894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 17:57:45.653398  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 17:57:46.900415  548894 crio.go:462] duration metric: took 1.251111162s to copy over tarball
	I1008 17:57:46.900502  548894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 17:57:48.824951  548894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.92441022s)
	I1008 17:57:48.824989  548894 crio.go:469] duration metric: took 1.924546326s to extract the tarball
	I1008 17:57:48.825000  548894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 17:57:48.862916  548894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 17:57:48.914586  548894 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 17:57:48.914611  548894 cache_images.go:84] Images are preloaded, skipping loading
	I1008 17:57:48.914620  548894 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 17:57:48.914713  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:57:48.914782  548894 ssh_runner.go:195] Run: crio config
	I1008 17:57:48.965231  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:57:48.965254  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:57:48.965272  548894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 17:57:48.965293  548894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 17:57:48.965430  548894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
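The kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the guest; it can also be sanity-checked offline. A minimal sketch, assuming the YAML is saved locally as kubeadm.yaml and that the local kubeadm matches v1.31 (which ships a validate subcommand):

    # illustrative only; not run by the test
    kubeadm config validate --config kubeadm.yaml
    # compare against the built-in defaults kubeadm would otherwise use
    kubeadm config print init-defaults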
	
	I1008 17:57:48.965457  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:57:48.965957  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:57:48.984862  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:57:48.984960  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
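Once this static pod manifest lands in /etc/kubernetes/manifests and kube-vip wins leader election, the HA VIP from the config (192.168.39.254 on eth0) should come up on the control plane. A minimal check, not part of this log:

    # inside the control-plane guest
    ip addr show eth0 | grep 192.168.39.254
    # the API server should answer health checks on the VIP (anonymous /healthz is allowed by default RBAC)
    curl -sk https://192.168.39.254:8443/healthz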
	I1008 17:57:48.985020  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:57:48.994069  548894 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 17:57:48.994134  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 17:57:49.003013  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 17:57:49.018952  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:57:49.034270  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 17:57:49.049856  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1008 17:57:49.065212  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:57:49.068890  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:57:49.080238  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:57:49.207273  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
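At this point the kubelet unit and the 10-kubeadm.conf drop-in shown earlier have been copied into /etc/systemd/system/kubelet.service.d and /lib/systemd/system, and the service has been started; if it misbehaves, the merged unit and recent logs can be inspected. A small sketch, not from this run:

    # on the guest
    systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet
    journalctl -u kubelet --no-pager | tail -n 20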
	I1008 17:57:49.224685  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 17:57:49.224709  548894 certs.go:194] generating shared ca certs ...
	I1008 17:57:49.224731  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.224901  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:57:49.224958  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:57:49.224972  548894 certs.go:256] generating profile certs ...
	I1008 17:57:49.225044  548894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:57:49.225073  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt with IP's: []
	I1008 17:57:49.321305  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt ...
	I1008 17:57:49.321342  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt: {Name:mkc9007ec871f6b1b480e3b611a05707a64a5848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321530  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key ...
	I1008 17:57:49.321546  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key: {Name:mke9b241dc151acd2e67df3e03efa92395ed4dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.321647  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc
	I1008 17:57:49.321666  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.254]
	I1008 17:57:49.615476  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc ...
	I1008 17:57:49.615508  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc: {Name:mk28ddc8f9cdc62c03babb0aa78423717078ec15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615696  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc ...
	I1008 17:57:49.615715  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc: {Name:mk7165300ee0dd42df7c546caae76a339625e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.615817  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:57:49.615941  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.23576ebc -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:57:49.616029  548894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:57:49.616053  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt with IP's: []
	I1008 17:57:49.700382  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt ...
	I1008 17:57:49.700415  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt: {Name:mk23273db76b4a6b0f12257e27a1a06fa6830ea6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.700587  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key ...
	I1008 17:57:49.700602  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key: {Name:mk0eecaa249eaee41f1ee6112c7eb1585a4e7c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:57:49.700724  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:57:49.700753  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:57:49.700768  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:57:49.700784  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:57:49.700811  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:57:49.700836  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:57:49.700855  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:57:49.700874  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:57:49.700934  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:57:49.700987  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:57:49.701002  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:57:49.701037  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:57:49.701072  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:57:49.701103  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:57:49.701155  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:57:49.701193  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:49.701232  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:57:49.701259  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:57:49.701875  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:57:49.727666  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:57:49.750886  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:57:49.773442  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:57:49.797562  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 17:57:49.820463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:57:49.843011  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:57:49.866615  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:57:49.889741  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:57:49.912810  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:57:49.936333  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:57:49.960454  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 17:57:49.979469  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:57:49.985669  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:57:49.997465  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003200  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.003257  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:57:50.009543  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:57:50.024695  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:57:50.038764  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044608  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.044730  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:57:50.050835  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:57:50.061168  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:57:50.071347  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075705  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.075749  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:57:50.081172  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
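
The three openssl/ln pairs above follow the standard OpenSSL CA layout: each CA copied to /usr/share/ca-certificates is staged into /etc/ssl/certs and then linked under its subject hash so TLS verification can find it. A minimal shell sketch of the same steps for the minikubeCA certificate (paths taken from the log):

    # stage the CA where OpenSSL looks for trusted certificates
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    # compute the subject hash OpenSSL uses for lookup (b5213941 for this CA)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # create the hashed symlink the verifier searches for at run time
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
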
	I1008 17:57:50.091550  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:57:50.095476  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:57:50.095534  548894 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:57:50.095625  548894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 17:57:50.095693  548894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 17:57:50.141057  548894 cri.go:89] found id: ""
	I1008 17:57:50.141128  548894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 17:57:50.155661  548894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 17:57:50.164965  548894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 17:57:50.174132  548894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 17:57:50.174150  548894 kubeadm.go:157] found existing configuration files:
	
	I1008 17:57:50.174193  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 17:57:50.182760  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 17:57:50.182801  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 17:57:50.191921  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 17:57:50.200321  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 17:57:50.200379  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 17:57:50.209419  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.217728  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 17:57:50.217774  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 17:57:50.226543  548894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 17:57:50.234817  548894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 17:57:50.234864  548894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
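
Each of the four checks above applies the same pattern: grep the kubeconfig-style file for the expected control-plane endpoint and remove it when the string is absent (here the files simply do not exist yet), so kubeadm regenerates them from scratch. A sketch of that check-and-remove loop, using the endpoint from this run:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
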
	I1008 17:57:50.243553  548894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 17:57:50.351407  548894 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 17:57:50.351505  548894 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 17:57:50.448058  548894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 17:57:50.448219  548894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 17:57:50.448390  548894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 17:57:50.458228  548894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 17:57:50.561945  548894 out.go:235]   - Generating certificates and keys ...
	I1008 17:57:50.562071  548894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 17:57:50.562160  548894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 17:57:50.581396  548894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 17:57:50.643567  548894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 17:57:50.777590  548894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 17:57:50.908209  548894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 17:57:51.030015  548894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 17:57:51.030180  548894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.147196  548894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 17:57:51.147407  548894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-094095 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I1008 17:57:51.301954  548894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 17:57:51.401522  548894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 17:57:51.537212  548894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 17:57:51.537477  548894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 17:57:51.996984  548894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 17:57:52.232782  548894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 17:57:52.360403  548894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 17:57:52.550793  548894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 17:57:52.645896  548894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 17:57:52.646431  548894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 17:57:52.649705  548894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 17:57:52.693095  548894 out.go:235]   - Booting up control plane ...
	I1008 17:57:52.693231  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 17:57:52.693301  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 17:57:52.693399  548894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 17:57:52.693595  548894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 17:57:52.693726  548894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 17:57:52.693765  548894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 17:57:52.808206  548894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 17:57:52.808366  548894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 17:57:53.309429  548894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.545044ms
	I1008 17:57:53.309511  548894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 17:57:59.231916  548894 kubeadm.go:310] [api-check] The API server is healthy after 5.925563733s
	I1008 17:57:59.243298  548894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 17:57:59.259662  548894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 17:57:59.788254  548894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 17:57:59.788485  548894 kubeadm.go:310] [mark-control-plane] Marking the node ha-094095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 17:57:59.797286  548894 kubeadm.go:310] [bootstrap-token] Using token: 3mfy3k.85hms8dtl8svlvkm
	I1008 17:57:59.798387  548894 out.go:235]   - Configuring RBAC rules ...
	I1008 17:57:59.798518  548894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 17:57:59.805485  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 17:57:59.816460  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 17:57:59.820883  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 17:57:59.823643  548894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 17:57:59.826562  548894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 17:57:59.838159  548894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 17:58:00.096325  548894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 17:58:00.637130  548894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 17:58:00.638100  548894 kubeadm.go:310] 
	I1008 17:58:00.638187  548894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 17:58:00.638198  548894 kubeadm.go:310] 
	I1008 17:58:00.638289  548894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 17:58:00.638337  548894 kubeadm.go:310] 
	I1008 17:58:00.638388  548894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 17:58:00.638476  548894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 17:58:00.638558  548894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 17:58:00.638573  548894 kubeadm.go:310] 
	I1008 17:58:00.638644  548894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 17:58:00.638654  548894 kubeadm.go:310] 
	I1008 17:58:00.638715  548894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 17:58:00.638725  548894 kubeadm.go:310] 
	I1008 17:58:00.638784  548894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 17:58:00.638864  548894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 17:58:00.638920  548894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 17:58:00.638927  548894 kubeadm.go:310] 
	I1008 17:58:00.638996  548894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 17:58:00.639061  548894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 17:58:00.639067  548894 kubeadm.go:310] 
	I1008 17:58:00.639138  548894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639257  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 17:58:00.639298  548894 kubeadm.go:310] 	--control-plane 
	I1008 17:58:00.639308  548894 kubeadm.go:310] 
	I1008 17:58:00.639444  548894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 17:58:00.639453  548894 kubeadm.go:310] 
	I1008 17:58:00.639547  548894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mfy3k.85hms8dtl8svlvkm \
	I1008 17:58:00.639692  548894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 17:58:00.640765  548894 kubeadm.go:310] W1008 17:57:50.322627     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.640999  548894 kubeadm.go:310] W1008 17:57:50.323512     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 17:58:00.641121  548894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
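
The two v1beta3 warnings relayed above come from kubeadm itself: the generated /var/tmp/minikube/kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API. The fix the warning points at would look roughly like this (the output path is illustrative):

    # rewrite the deprecated v1beta3 config into the current kubeadm API version
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
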
	I1008 17:58:00.641159  548894 cni.go:84] Creating CNI manager for ""
	I1008 17:58:00.641169  548894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 17:58:00.643434  548894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1008 17:58:00.644444  548894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 17:58:00.650209  548894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1008 17:58:00.650224  548894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 17:58:00.677687  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
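
Because the profile is multi-node, minikube selects kindnet, writes the manifest to /var/tmp/minikube/cni.yaml, and applies it with the bundled kubectl against the node-local admin kubeconfig. The hand-run equivalent of the two steps above:

    # the portmap plugin must already be present on the ISO for kindnet's config to work
    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml
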
	I1008 17:58:01.011782  548894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 17:58:01.011872  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.011918  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095 minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=true
	I1008 17:58:01.050127  548894 ops.go:34] apiserver oom_adj: -16
	I1008 17:58:01.121355  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:01.622435  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.121789  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:02.621637  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.121512  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:03.621993  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.121641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.621728  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 17:58:04.753917  548894 kubeadm.go:1113] duration metric: took 3.742110374s to wait for elevateKubeSystemPrivileges
	I1008 17:58:04.753962  548894 kubeadm.go:394] duration metric: took 14.658436547s to StartCluster
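
The elevateKubeSystemPrivileges step timed above is the clusterrolebinding plus the polling of "kubectl get sa default" seen at 17:58:01-17:58:04: it grants cluster-admin to kube-system:default and waits until the default service account exists. A sketch of the same sequence with the paths from this run:

    # bind cluster-admin to kube-system:default (the minikube-rbac binding)
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # poll until the controller-manager has created the default service account
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
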
	I1008 17:58:04.753985  548894 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.754071  548894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.755006  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:04.755245  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 17:58:04.755258  548894 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:04.755285  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:58:04.755305  548894 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 17:58:04.755395  548894 addons.go:69] Setting storage-provisioner=true in profile "ha-094095"
	I1008 17:58:04.755421  548894 addons.go:234] Setting addon storage-provisioner=true in "ha-094095"
	I1008 17:58:04.755423  548894 addons.go:69] Setting default-storageclass=true in profile "ha-094095"
	I1008 17:58:04.755450  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.755463  548894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-094095"
	I1008 17:58:04.755954  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:04.756015  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756060  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.756153  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.756178  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.771314  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I1008 17:58:04.771411  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1008 17:58:04.771715  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.771845  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.772259  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772280  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772399  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.772421  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.772677  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772761  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.772921  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.773166  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.773207  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.775127  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:04.775464  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 17:58:04.776098  548894 cert_rotation.go:140] Starting client certificate rotation controller
	I1008 17:58:04.776464  548894 addons.go:234] Setting addon default-storageclass=true in "ha-094095"
	I1008 17:58:04.776513  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:04.776901  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.776950  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.788872  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I1008 17:58:04.789408  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.789954  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.789982  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.790391  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.790585  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.791166  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1008 17:58:04.791602  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.792075  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.792102  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.792300  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.792437  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.792883  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:04.792921  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:04.794070  548894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 17:58:04.795292  548894 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:04.795314  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 17:58:04.795333  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.798275  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798778  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.798817  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.798959  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.799152  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.799319  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.799447  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.807217  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1008 17:58:04.807681  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:04.808084  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:04.808108  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:04.808466  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:04.808664  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:04.810084  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:04.810282  548894 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:04.810305  548894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 17:58:04.810351  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:04.813002  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813401  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:04.813426  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:04.813628  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:04.813798  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:04.813951  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:04.814091  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:04.894935  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 17:58:04.989822  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 17:58:05.005242  548894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 17:58:05.480020  548894 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
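
The pipeline at 17:58:04.894935 rewrites the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the gateway 192.168.39.1 ahead of the forward-to-resolv.conf line, enables log, and pushes the result back with kubectl replace. Assuming kubectl access to the cluster, the patched Corefile can be inspected with:

    # the injected block should show 192.168.39.1 host.minikube.internal with fallthrough
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
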
	I1008 17:58:05.749086  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749116  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749148  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749170  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749410  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749425  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749434  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749440  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749521  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.749536  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749550  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.749557  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.749608  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749908  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.749943  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750036  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.749970  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.750103  548894 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 17:58:05.749988  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.750114  548894 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 17:58:05.750160  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.750219  548894 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1008 17:58:05.750231  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.750241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.750250  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.762332  548894 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1008 17:58:05.763152  548894 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1008 17:58:05.763172  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:05.763185  548894 round_trippers.go:473]     Content-Type: application/json
	I1008 17:58:05.763193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:05.763197  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:05.765314  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:58:05.765554  548894 main.go:141] libmachine: Making call to close driver server
	I1008 17:58:05.765571  548894 main.go:141] libmachine: (ha-094095) Calling .Close
	I1008 17:58:05.765856  548894 main.go:141] libmachine: Successfully made call to close driver server
	I1008 17:58:05.765872  548894 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 17:58:05.765886  548894 main.go:141] libmachine: (ha-094095) DBG | Closing plugin on server side
	I1008 17:58:05.768201  548894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1008 17:58:05.769166  548894 addons.go:510] duration metric: took 1.013864152s for enable addons: enabled=[storage-provisioner default-storageclass]
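
Both addon applies above run the bundled kubectl against manifests scp'd to /etc/kubernetes/addons, and the PUT to .../storageclasses/standard marks the default StorageClass as patched. A quick way to confirm the result from a machine with access to the cluster (the provisioner pod name is filtered for rather than assumed exact):

    kubectl get storageclass standard
    kubectl -n kube-system get pods | grep storage-provisioner
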
	I1008 17:58:05.769206  548894 start.go:246] waiting for cluster config update ...
	I1008 17:58:05.769221  548894 start.go:255] writing updated cluster config ...
	I1008 17:58:05.770624  548894 out.go:201] 
	I1008 17:58:05.771889  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:05.771979  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.773435  548894 out.go:177] * Starting "ha-094095-m02" control-plane node in "ha-094095" cluster
	I1008 17:58:05.774389  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:58:05.774416  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:58:05.774517  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:58:05.774543  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:58:05.774635  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:05.774827  548894 start.go:360] acquireMachinesLock for ha-094095-m02: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:58:05.774885  548894 start.go:364] duration metric: took 34.657µs to acquireMachinesLock for "ha-094095-m02"
	I1008 17:58:05.774908  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:05.775005  548894 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1008 17:58:05.776351  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:58:05.776440  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:05.776482  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:05.791492  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I1008 17:58:05.791992  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:05.792464  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:05.792487  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:05.792786  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:05.792949  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:05.793054  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:05.793160  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:58:05.793192  548894 client.go:168] LocalClient.Create starting
	I1008 17:58:05.793230  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:58:05.793268  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793289  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793356  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:58:05.793382  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:58:05.793399  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:58:05.793425  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:58:05.793436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .PreCreateCheck
	I1008 17:58:05.793636  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:05.793961  548894 main.go:141] libmachine: Creating machine...
	I1008 17:58:05.793974  548894 main.go:141] libmachine: (ha-094095-m02) Calling .Create
	I1008 17:58:05.794087  548894 main.go:141] libmachine: (ha-094095-m02) Creating KVM machine...
	I1008 17:58:05.795174  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing default KVM network
	I1008 17:58:05.795373  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found existing private KVM network mk-ha-094095
	I1008 17:58:05.795488  548894 main.go:141] libmachine: (ha-094095-m02) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:05.795518  548894 main.go:141] libmachine: (ha-094095-m02) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:58:05.795590  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:05.795498  549282 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:05.795693  548894 main.go:141] libmachine: (ha-094095-m02) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:58:06.080254  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.080126  549282 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa...
	I1008 17:58:06.408665  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408546  549282 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk...
	I1008 17:58:06.408701  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing magic tar header
	I1008 17:58:06.408716  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Writing SSH key tar header
	I1008 17:58:06.408729  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:06.408669  549282 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 ...
	I1008 17:58:06.408798  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02
	I1008 17:58:06.408863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:58:06.408916  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02 (perms=drwx------)
	I1008 17:58:06.408935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:58:06.408946  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:58:06.408954  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:58:06.408966  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:58:06.408972  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Checking permissions on dir: /home
	I1008 17:58:06.408988  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Skipping /home - not owner
	I1008 17:58:06.409003  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:58:06.409013  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:58:06.409022  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:58:06.409038  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:58:06.409050  548894 main.go:141] libmachine: (ha-094095-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
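
Machine creation up to this point has generated an RSA key pair for SSH, built the raw disk image carrying the boot2docker payload plus a small tar that holds that key ("Writing magic tar header" / "Writing SSH key tar header"), and tightened directory permissions. The key-pair step is done programmatically by the driver; a hand-run analogue, assuming OpenSSH tooling, would be:

    # generate an unencrypted RSA key at the path the driver uses for this machine
    ssh-keygen -t rsa -N '' \
      -f /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa
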
	I1008 17:58:06.409060  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:06.410262  548894 main.go:141] libmachine: (ha-094095-m02) define libvirt domain using xml: 
	I1008 17:58:06.410280  548894 main.go:141] libmachine: (ha-094095-m02) <domain type='kvm'>
	I1008 17:58:06.410300  548894 main.go:141] libmachine: (ha-094095-m02)   <name>ha-094095-m02</name>
	I1008 17:58:06.410310  548894 main.go:141] libmachine: (ha-094095-m02)   <memory unit='MiB'>2200</memory>
	I1008 17:58:06.410330  548894 main.go:141] libmachine: (ha-094095-m02)   <vcpu>2</vcpu>
	I1008 17:58:06.410344  548894 main.go:141] libmachine: (ha-094095-m02)   <features>
	I1008 17:58:06.410353  548894 main.go:141] libmachine: (ha-094095-m02)     <acpi/>
	I1008 17:58:06.410361  548894 main.go:141] libmachine: (ha-094095-m02)     <apic/>
	I1008 17:58:06.410367  548894 main.go:141] libmachine: (ha-094095-m02)     <pae/>
	I1008 17:58:06.410371  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410376  548894 main.go:141] libmachine: (ha-094095-m02)   </features>
	I1008 17:58:06.410383  548894 main.go:141] libmachine: (ha-094095-m02)   <cpu mode='host-passthrough'>
	I1008 17:58:06.410388  548894 main.go:141] libmachine: (ha-094095-m02)   
	I1008 17:58:06.410392  548894 main.go:141] libmachine: (ha-094095-m02)   </cpu>
	I1008 17:58:06.410397  548894 main.go:141] libmachine: (ha-094095-m02)   <os>
	I1008 17:58:06.410403  548894 main.go:141] libmachine: (ha-094095-m02)     <type>hvm</type>
	I1008 17:58:06.410408  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='cdrom'/>
	I1008 17:58:06.410418  548894 main.go:141] libmachine: (ha-094095-m02)     <boot dev='hd'/>
	I1008 17:58:06.410430  548894 main.go:141] libmachine: (ha-094095-m02)     <bootmenu enable='no'/>
	I1008 17:58:06.410440  548894 main.go:141] libmachine: (ha-094095-m02)   </os>
	I1008 17:58:06.410448  548894 main.go:141] libmachine: (ha-094095-m02)   <devices>
	I1008 17:58:06.410456  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='cdrom'>
	I1008 17:58:06.410468  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/boot2docker.iso'/>
	I1008 17:58:06.410474  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hdc' bus='scsi'/>
	I1008 17:58:06.410479  548894 main.go:141] libmachine: (ha-094095-m02)       <readonly/>
	I1008 17:58:06.410485  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410515  548894 main.go:141] libmachine: (ha-094095-m02)     <disk type='file' device='disk'>
	I1008 17:58:06.410542  548894 main.go:141] libmachine: (ha-094095-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:58:06.410557  548894 main.go:141] libmachine: (ha-094095-m02)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/ha-094095-m02.rawdisk'/>
	I1008 17:58:06.410568  548894 main.go:141] libmachine: (ha-094095-m02)       <target dev='hda' bus='virtio'/>
	I1008 17:58:06.410582  548894 main.go:141] libmachine: (ha-094095-m02)     </disk>
	I1008 17:58:06.410592  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410604  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='mk-ha-094095'/>
	I1008 17:58:06.410613  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410622  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410630  548894 main.go:141] libmachine: (ha-094095-m02)     <interface type='network'>
	I1008 17:58:06.410642  548894 main.go:141] libmachine: (ha-094095-m02)       <source network='default'/>
	I1008 17:58:06.410661  548894 main.go:141] libmachine: (ha-094095-m02)       <model type='virtio'/>
	I1008 17:58:06.410673  548894 main.go:141] libmachine: (ha-094095-m02)     </interface>
	I1008 17:58:06.410683  548894 main.go:141] libmachine: (ha-094095-m02)     <serial type='pty'>
	I1008 17:58:06.410692  548894 main.go:141] libmachine: (ha-094095-m02)       <target port='0'/>
	I1008 17:58:06.410700  548894 main.go:141] libmachine: (ha-094095-m02)     </serial>
	I1008 17:58:06.410712  548894 main.go:141] libmachine: (ha-094095-m02)     <console type='pty'>
	I1008 17:58:06.410727  548894 main.go:141] libmachine: (ha-094095-m02)       <target type='serial' port='0'/>
	I1008 17:58:06.410741  548894 main.go:141] libmachine: (ha-094095-m02)     </console>
	I1008 17:58:06.410750  548894 main.go:141] libmachine: (ha-094095-m02)     <rng model='virtio'>
	I1008 17:58:06.410761  548894 main.go:141] libmachine: (ha-094095-m02)       <backend model='random'>/dev/random</backend>
	I1008 17:58:06.410771  548894 main.go:141] libmachine: (ha-094095-m02)     </rng>
	I1008 17:58:06.410780  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410787  548894 main.go:141] libmachine: (ha-094095-m02)     
	I1008 17:58:06.410796  548894 main.go:141] libmachine: (ha-094095-m02)   </devices>
	I1008 17:58:06.410804  548894 main.go:141] libmachine: (ha-094095-m02) </domain>
	I1008 17:58:06.410828  548894 main.go:141] libmachine: (ha-094095-m02) 
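
The XML above is what docker-machine-driver-kvm2 hands to libvirt for the m02 guest: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM, the raw disk, and NICs on both the private mk-ha-094095 network and the default network. Defining and booting the same guest by hand, assuming the XML were saved to a file, would look roughly like:

    virsh -c qemu:///system define ha-094095-m02.xml   # file name illustrative
    virsh -c qemu:///system start ha-094095-m02
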
	I1008 17:58:06.418030  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:0f:fc:b1 in network default
	I1008 17:58:06.418595  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring networks are active...
	I1008 17:58:06.418616  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:06.419273  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network default is active
	I1008 17:58:06.419679  548894 main.go:141] libmachine: (ha-094095-m02) Ensuring network mk-ha-094095 is active
	I1008 17:58:06.420099  548894 main.go:141] libmachine: (ha-094095-m02) Getting domain xml...
	I1008 17:58:06.420774  548894 main.go:141] libmachine: (ha-094095-m02) Creating domain...
	I1008 17:58:07.625613  548894 main.go:141] libmachine: (ha-094095-m02) Waiting to get IP...
	I1008 17:58:07.626394  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.626834  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.626863  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.626812  549282 retry.go:31] will retry after 298.191028ms: waiting for machine to come up
	I1008 17:58:07.926517  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:07.926935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:07.926967  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:07.926892  549282 retry.go:31] will retry after 251.007436ms: waiting for machine to come up
	I1008 17:58:08.179311  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.179723  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.179753  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.179684  549282 retry.go:31] will retry after 369.990509ms: waiting for machine to come up
	I1008 17:58:08.551209  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:08.551664  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:08.551688  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:08.551618  549282 retry.go:31] will retry after 529.446819ms: waiting for machine to come up
	I1008 17:58:09.082289  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.082764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.082787  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.082730  549282 retry.go:31] will retry after 698.772609ms: waiting for machine to come up
	I1008 17:58:09.782428  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:09.783035  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:09.783077  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:09.782975  549282 retry.go:31] will retry after 749.123701ms: waiting for machine to come up
	I1008 17:58:10.533886  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:10.534374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:10.534406  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:10.534314  549282 retry.go:31] will retry after 748.167347ms: waiting for machine to come up
	I1008 17:58:11.284374  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:11.284764  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:11.284793  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:11.284726  549282 retry.go:31] will retry after 1.314312212s: waiting for machine to come up
	I1008 17:58:12.600256  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:12.600675  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:12.600706  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:12.600619  549282 retry.go:31] will retry after 1.264771643s: waiting for machine to come up
	I1008 17:58:13.867255  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:13.867784  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:13.867816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:13.867728  549282 retry.go:31] will retry after 2.081210662s: waiting for machine to come up
	I1008 17:58:15.950893  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:15.951309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:15.951341  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:15.951258  549282 retry.go:31] will retry after 2.823132453s: waiting for machine to come up
	I1008 17:58:18.778198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:18.778573  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:18.778605  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:18.778535  549282 retry.go:31] will retry after 2.715237967s: waiting for machine to come up
	I1008 17:58:21.495309  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:21.495754  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:21.495780  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:21.495712  549282 retry.go:31] will retry after 2.962404474s: waiting for machine to come up
	I1008 17:58:24.461815  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:24.462170  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find current IP address of domain ha-094095-m02 in network mk-ha-094095
	I1008 17:58:24.462198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | I1008 17:58:24.462131  549282 retry.go:31] will retry after 4.711440731s: waiting for machine to come up
	I1008 17:58:29.176935  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177439  548894 main.go:141] libmachine: (ha-094095-m02) Found IP for machine: 192.168.39.65
	I1008 17:58:29.177459  548894 main.go:141] libmachine: (ha-094095-m02) Reserving static IP address...
	I1008 17:58:29.177467  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has current primary IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.177881  548894 main.go:141] libmachine: (ha-094095-m02) DBG | unable to find host DHCP lease matching {name: "ha-094095-m02", mac: "52:54:00:28:c9:b2", ip: "192.168.39.65"} in network mk-ha-094095
	I1008 17:58:29.250979  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Getting to WaitForSSH function...
	I1008 17:58:29.251007  548894 main.go:141] libmachine: (ha-094095-m02) Reserved static IP address: 192.168.39.65
	I1008 17:58:29.251020  548894 main.go:141] libmachine: (ha-094095-m02) Waiting for SSH to be available...
	I1008 17:58:29.253304  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253715  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.253745  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.253826  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH client type: external
	I1008 17:58:29.253858  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa (-rw-------)
	I1008 17:58:29.253895  548894 main.go:141] libmachine: (ha-094095-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:58:29.253928  548894 main.go:141] libmachine: (ha-094095-m02) DBG | About to run SSH command:
	I1008 17:58:29.253953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | exit 0
	I1008 17:58:29.377997  548894 main.go:141] libmachine: (ha-094095-m02) DBG | SSH cmd err, output: <nil>: 
	I1008 17:58:29.378287  548894 main.go:141] libmachine: (ha-094095-m02) KVM machine creation complete!
	I1008 17:58:29.378621  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:29.379167  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379376  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:29.379500  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:58:29.379514  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 17:58:29.380658  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:58:29.380670  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:58:29.380676  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:58:29.380683  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.382734  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383074  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.383097  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.383251  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.383416  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383613  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.383753  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.383914  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.384122  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.384133  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:58:29.485427  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:58:29.485449  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:58:29.485460  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.488012  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488364  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.488395  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.488586  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.488786  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.488953  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.489087  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.489247  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.489514  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.489530  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:58:29.590445  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:58:29.590532  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:58:29.590542  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:58:29.590551  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.590782  548894 buildroot.go:166] provisioning hostname "ha-094095-m02"
	I1008 17:58:29.590806  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.591021  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.593666  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594067  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.594096  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.594246  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.594404  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594554  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.594724  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.594891  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.595109  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.595125  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m02 && echo "ha-094095-m02" | sudo tee /etc/hostname
	I1008 17:58:29.714147  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m02
	
	I1008 17:58:29.714180  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.716973  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717353  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.717384  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.717565  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.717752  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.717913  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.718050  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.718222  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:29.718416  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:29.718433  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:58:29.831586  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:58:29.831619  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:58:29.831636  548894 buildroot.go:174] setting up certificates
	I1008 17:58:29.831645  548894 provision.go:84] configureAuth start
	I1008 17:58:29.831659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetMachineName
	I1008 17:58:29.831944  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:29.834827  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835217  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.835237  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.835436  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.837816  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838198  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.838223  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.838374  548894 provision.go:143] copyHostCerts
	I1008 17:58:29.838406  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838440  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:58:29.838448  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:58:29.838513  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:58:29.838598  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838615  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:58:29.838620  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:58:29.838643  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:58:29.838682  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838698  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:58:29.838704  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:58:29.838730  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:58:29.838774  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m02 san=[127.0.0.1 192.168.39.65 ha-094095-m02 localhost minikube]
	I1008 17:58:29.938554  548894 provision.go:177] copyRemoteCerts
	I1008 17:58:29.938614  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:58:29.938646  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:29.941344  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941644  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:29.941673  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:29.941805  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:29.942003  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:29.942163  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:29.942301  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.024548  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:58:30.024622  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:58:30.049270  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:58:30.049353  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:58:30.073294  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:58:30.073363  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 17:58:30.097034  548894 provision.go:87] duration metric: took 265.374667ms to configureAuth
	I1008 17:58:30.097066  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:58:30.097258  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:30.097336  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.100086  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100367  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.100397  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.100547  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.100709  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.100901  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.101076  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.101293  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.101528  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.101554  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:58:30.316444  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:58:30.316471  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:58:30.316479  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetURL
	I1008 17:58:30.317802  548894 main.go:141] libmachine: (ha-094095-m02) DBG | Using libvirt version 6000000
	I1008 17:58:30.320137  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320544  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.320587  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.320709  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:58:30.320718  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:58:30.320726  548894 client.go:171] duration metric: took 24.527519698s to LocalClient.Create
	I1008 17:58:30.320756  548894 start.go:167] duration metric: took 24.527598536s to libmachine.API.Create "ha-094095"
	I1008 17:58:30.320770  548894 start.go:293] postStartSetup for "ha-094095-m02" (driver="kvm2")
	I1008 17:58:30.320783  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:58:30.320822  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.321070  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:58:30.321097  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.323268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323601  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.323630  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.323770  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.323934  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.324073  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.324173  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.408962  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:58:30.413084  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:58:30.413110  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:58:30.413178  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:58:30.413266  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:58:30.413279  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:58:30.413385  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:58:30.423213  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:30.446502  548894 start.go:296] duration metric: took 125.715217ms for postStartSetup
	I1008 17:58:30.446572  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetConfigRaw
	I1008 17:58:30.447199  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.449851  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450235  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.450268  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.450469  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:58:30.450701  548894 start.go:128] duration metric: took 24.675682473s to createHost
	I1008 17:58:30.450743  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.453038  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453348  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.453375  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.453496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.453697  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.453857  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.454010  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.454159  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:58:30.454400  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I1008 17:58:30.454410  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:58:30.559077  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410310.517666608
	
	I1008 17:58:30.559107  548894 fix.go:216] guest clock: 1728410310.517666608
	I1008 17:58:30.559114  548894 fix.go:229] Guest: 2024-10-08 17:58:30.517666608 +0000 UTC Remote: 2024-10-08 17:58:30.45071757 +0000 UTC m=+71.541677784 (delta=66.949038ms)
	I1008 17:58:30.559131  548894 fix.go:200] guest clock delta is within tolerance: 66.949038ms
	I1008 17:58:30.559136  548894 start.go:83] releasing machines lock for "ha-094095-m02", held for 24.78424013s
	I1008 17:58:30.559157  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.559409  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:30.562379  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.562717  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.562741  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.564989  548894 out.go:177] * Found network options:
	I1008 17:58:30.566270  548894 out.go:177]   - NO_PROXY=192.168.39.99
	W1008 17:58:30.567463  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.567496  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568070  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568303  548894 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 17:58:30.568423  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:58:30.568473  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	W1008 17:58:30.568503  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:58:30.568602  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:58:30.568624  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 17:58:30.570953  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571141  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571291  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571315  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571468  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:30.571489  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:30.571498  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571659  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 17:58:30.571671  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 17:58:30.571841  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572011  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 17:58:30.572054  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.572151  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 17:58:30.807329  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:58:30.813213  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:58:30.813287  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:58:30.829683  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:58:30.829708  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:58:30.829790  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:58:30.845021  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:58:30.858172  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:58:30.858226  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:58:30.871442  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:58:30.884200  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:58:31.001594  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:58:31.145565  548894 docker.go:233] disabling docker service ...
	I1008 17:58:31.145647  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:58:31.159802  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:58:31.172545  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:58:31.317614  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:58:31.428085  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:58:31.441474  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:58:31.458921  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:58:31.458992  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.469332  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:58:31.469401  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.479553  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.489606  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.499476  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:58:31.509618  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.519561  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.536177  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:58:31.546145  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:58:31.555445  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:58:31.555504  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:58:31.568401  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 17:58:31.577660  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:31.690206  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:58:31.785577  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:58:31.785668  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:58:31.790440  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:58:31.790488  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:58:31.794008  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:58:31.830698  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:58:31.830779  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.860448  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:58:31.888491  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:58:31.889686  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:58:31.890999  548894 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 17:58:31.893749  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894085  548894 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:58:20 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 17:58:31.894111  548894 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 17:58:31.894298  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:58:31.898872  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:31.911229  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:58:31.911431  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:31.911784  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.911827  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.926475  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I1008 17:58:31.926940  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.927427  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.927446  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.927739  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.927928  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:58:31.929331  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:31.929604  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:31.929636  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:31.944569  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1008 17:58:31.945071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:31.945554  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:31.945577  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:31.945884  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:31.946077  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:31.946243  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.65
	I1008 17:58:31.946257  548894 certs.go:194] generating shared ca certs ...
	I1008 17:58:31.946274  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:31.946447  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:58:31.946488  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:58:31.946503  548894 certs.go:256] generating profile certs ...
	I1008 17:58:31.946591  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:58:31.946614  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9
	I1008 17:58:31.946631  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.254]
	I1008 17:58:32.004758  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 ...
	I1008 17:58:32.004782  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9: {Name:mk5f5c650d9dd5d2249fb843b585c028b52aecec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.004936  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 ...
	I1008 17:58:32.004948  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9: {Name:mk72de6dbb470530f019dc623057311deeb636c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:58:32.005014  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:58:32.005145  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.82f63ae9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:58:32.005267  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:58:32.005283  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:58:32.005296  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:58:32.005308  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:58:32.005321  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:58:32.005335  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:58:32.005348  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:58:32.005359  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:58:32.005370  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:58:32.005421  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:58:32.005451  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:58:32.005460  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:58:32.005496  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:58:32.005520  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:58:32.005541  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:58:32.005579  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:58:32.005605  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.005619  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.005631  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.005665  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:32.008694  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009085  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:32.009115  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:32.009227  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:32.009422  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:32.009576  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:32.009716  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:32.082578  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:58:32.087536  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:58:32.098777  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:58:32.102888  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:58:32.112522  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:58:32.116400  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:58:32.126625  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:58:32.130706  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:58:32.141238  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:58:32.145206  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:58:32.154909  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:58:32.159011  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:58:32.169341  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:58:32.193388  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:58:32.215733  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:58:32.237995  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:58:32.260545  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 17:58:32.283295  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 17:58:32.305577  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:58:32.327963  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:58:32.350081  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:58:32.372344  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:58:32.394280  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:58:32.416064  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:58:32.431348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:58:32.446729  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:58:32.462348  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:58:32.479908  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:58:32.495280  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:58:32.510638  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1008 17:58:32.526014  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:58:32.531514  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:58:32.541262  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545663  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.545708  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:58:32.551139  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 17:58:32.561010  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:58:32.570960  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575030  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.575086  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:58:32.580417  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:58:32.590088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:58:32.600566  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604834  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.604876  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:58:32.610374  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
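The ln -fs calls above install each CA under /usr/share/ca-certificates and expose it in /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0). A minimal sketch of that naming convention, assuming openssl is on PATH and reusing the minikubeCA.pem path shown above; the program is illustrative, not minikube's code:

// hashlink.go: sketch of the "<subject-hash>.0" symlink convention seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

	// openssl x509 -hash -noout prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// /etc/ssl/certs/<hash>.0 -> the certificate file, mirroring the ln -fs semantics.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ignore "does not exist"
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}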
	I1008 17:58:32.620430  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:58:32.624404  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:58:32.624460  548894 kubeadm.go:934] updating node {m02 192.168.39.65 8443 v1.31.1 crio true true} ...
	I1008 17:58:32.624566  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:58:32.624597  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:58:32.624632  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:58:32.640207  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:58:32.640276  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
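The kube-vip manifest above is rendered with the cluster VIP (192.168.39.254), port and image filled into a template before being written to /etc/kubernetes/manifests. A rough sketch of that kind of rendering with Go's text/template; the trimmed manifest and the params struct are illustrative, only the values mirror the log:

// kubevip_template.go: sketch of rendering a kube-vip style static pod manifest.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

type params struct {
	Image string
	VIP   string
	Port  string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values visible in the log; the template itself is abridged.
	p := params{Image: "ghcr.io/kube-vip/kube-vip:v0.8.3", VIP: "192.168.39.254", Port: "8443"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}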
	I1008 17:58:32.640318  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.651418  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:58:32.651482  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:58:32.660840  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:58:32.660867  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660925  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:58:32.660955  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1008 17:58:32.660974  548894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1008 17:58:32.665332  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:58:32.665355  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:58:33.330557  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.330641  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:58:33.335582  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:58:33.335623  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:58:33.372522  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:58:33.392996  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.393114  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:58:33.400473  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:58:33.400509  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
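The download.go lines above fetch each Kubernetes binary with a checksum=file:<url>.sha256 parameter, i.e. the artifact is checked against the published SHA-256 digest before it is cached and copied to the node. A self-contained sketch of that verification step for kubeadm; the URL is the one logged, the helper itself is hypothetical:

// verify_download.go: download a file and check it against its published .sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and wrote kubeadm")
}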
	I1008 17:58:33.862223  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:58:33.873974  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 17:58:33.890552  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:58:33.907049  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:58:33.923719  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:58:33.927643  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:58:33.940952  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:34.068619  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:34.085108  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:58:34.085464  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:58:34.085525  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:58:34.100590  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I1008 17:58:34.101071  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:58:34.101641  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:58:34.101663  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:58:34.101990  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:58:34.102197  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:58:34.102362  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:58:34.102466  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:58:34.102489  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:58:34.105069  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105405  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:58:34.105432  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:58:34.105659  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:58:34.105846  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:58:34.106036  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:58:34.106174  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:58:34.253303  548894 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:34.253365  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443"
	I1008 17:58:55.647352  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token l4584u.pn9wab4hiynnfk20 --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m02 --control-plane --apiserver-advertise-address=192.168.39.65 --apiserver-bind-port=8443": (21.393954296s)
	I1008 17:58:55.647399  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 17:58:56.179900  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m02 minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 17:58:56.351414  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 17:58:56.472891  548894 start.go:319] duration metric: took 22.370522266s to joinCluster
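The join above follows a simple recipe: ask the existing control plane for a join command (kubeadm token create --print-join-command --ttl=0), append the control-plane-specific flags for the new node, run it over SSH, then label and un-taint the node. A sketch of assembling that command line; the token and hash are placeholders, the extra flags are the ones copied from the log:

// join_command.go: sketch of building the control-plane join invocation seen above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Placeholder for the output of: kubeadm token create --print-join-command --ttl=0
	base := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"

	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=ha-094095-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.65",
		"--apiserver-bind-port=8443",
	}
	fmt.Println(base + " " + strings.Join(extra, " "))
}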
	I1008 17:58:56.472999  548894 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:58:56.473310  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:58:56.474358  548894 out.go:177] * Verifying Kubernetes components...
	I1008 17:58:56.475511  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:58:56.748460  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:58:56.780862  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:58:56.781184  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 17:58:56.781253  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 17:58:56.781476  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m02" to be "Ready" ...
	I1008 17:58:56.781593  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:56.781601  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:56.781608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:56.781612  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:56.791092  548894 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1008 17:58:57.281764  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.281787  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.281795  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.281800  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.293233  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:58:57.782526  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:57.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:57.782566  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:57.782571  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:57.786781  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.281871  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.281899  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.281911  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.281917  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.285022  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:58:58.781938  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:58.781972  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:58.781983  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:58.781989  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:58.786159  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:58.786795  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:58:59.282562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.282596  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.282609  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.282619  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.286768  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:58:59.781827  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:58:59.781856  548894 round_trippers.go:469] Request Headers:
	I1008 17:58:59.781867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:58:59.781872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:58:59.785211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:00.282380  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.282406  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.282417  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.282424  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.285358  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:00.782500  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:00.782529  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:00.782538  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:00.782541  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:00.785321  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.281680  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.281702  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.281711  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.281717  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.284371  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:01.285041  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:01.782411  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:01.782443  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:01.782453  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:01.782458  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:01.785485  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.282181  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.282203  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.282212  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.282217  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.285355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:02.782528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:02.782554  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:02.782565  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:02.782571  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:02.785688  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.282604  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.282627  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.282638  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.282646  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.286199  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:03.286918  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:03.782407  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:03.782431  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:03.782441  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:03.782447  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:03.785212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:04.282369  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.282392  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.282400  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.282404  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.285540  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:04.781799  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:04.781818  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:04.781831  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:04.781835  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:04.785050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.282133  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.282156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.282163  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.282166  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.285211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:05.782060  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:05.782079  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:05.782090  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:05.782097  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:05.784932  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:05.785622  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:06.282491  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.282513  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.282521  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.282524  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.285446  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:06.782400  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:06.782424  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:06.782433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:06.782439  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:06.787263  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:07.282189  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.282221  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.282227  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.282231  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.285027  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:07.781864  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:07.781885  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:07.781895  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:07.781901  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:07.784237  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:08.281994  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.282014  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.282022  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.282027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.285398  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:08.286042  548894 node_ready.go:53] node "ha-094095-m02" has status "Ready":"False"
	I1008 17:59:08.782428  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:08.782454  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:08.782466  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:08.782472  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:08.785709  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.282163  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.282193  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.282204  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.282211  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.285429  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:09.782392  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:09.782415  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:09.782423  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:09.782427  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:09.785404  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.282376  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.282398  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.282407  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.282410  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.293860  548894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1008 17:59:10.295059  548894 node_ready.go:49] node "ha-094095-m02" has status "Ready":"True"
	I1008 17:59:10.295090  548894 node_ready.go:38] duration metric: took 13.513574743s for node "ha-094095-m02" to be "Ready" ...
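The node_ready.go loop above simply re-issues GET /api/v1/nodes/ha-094095-m02 roughly every 500ms until the NodeReady condition reports True. An equivalent poll with client-go, assuming the kubeconfig path from this run; the loop is a simplification, not minikube's actual helper:

// node_ready.go sketch: poll a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19774-529764/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-094095-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for node to become Ready")
}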
	I1008 17:59:10.295105  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:59:10.295211  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:10.295228  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.295239  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.295243  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.309090  548894 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1008 17:59:10.317441  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.317556  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 17:59:10.317568  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.317578  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.317586  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.321472  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.322135  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.322156  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.322167  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.322174  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.328845  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.329380  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.329405  548894 pod_ready.go:82] duration metric: took 11.930599ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329419  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.329498  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 17:59:10.329509  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.329520  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.329528  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.336402  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 17:59:10.337294  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.337313  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.337323  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.337328  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.340848  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.341320  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.341341  548894 pod_ready.go:82] duration metric: took 11.909652ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341354  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.341421  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 17:59:10.341432  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.341442  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.341450  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.343586  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.344175  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.344191  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.344198  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.344202  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.346350  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.347112  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.347134  548894 pod_ready.go:82] duration metric: took 5.772495ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347147  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.347220  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 17:59:10.347231  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.347241  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.347249  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.349293  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.349880  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:10.349897  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.349916  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.349921  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.352009  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:10.352470  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.352496  548894 pod_ready.go:82] duration metric: took 5.340167ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.352518  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.482865  548894 request.go:632] Waited for 130.276413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482957  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 17:59:10.482968  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.482977  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.482983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.486050  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.683204  548894 request.go:632] Waited for 196.383245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683286  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:10.683291  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.683299  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.683302  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.686545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:10.687112  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:10.687134  548894 pod_ready.go:82] duration metric: took 334.609013ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.687145  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:10.882406  548894 request.go:632] Waited for 195.187252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882484  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 17:59:10.882489  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:10.882498  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:10.882503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:10.885610  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.082756  548894 request.go:632] Waited for 196.397183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082846  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.082857  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.082869  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.082874  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.085950  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.086623  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.086650  548894 pod_ready.go:82] duration metric: took 399.497445ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.086663  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.282438  548894 request.go:632] Waited for 195.669677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282535  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 17:59:11.282544  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.282552  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.282557  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.285746  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.482936  548894 request.go:632] Waited for 196.360528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483014  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:11.483021  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.483030  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.483037  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.486267  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.486823  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.486845  548894 pod_ready.go:82] duration metric: took 400.172946ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.486856  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.683063  548894 request.go:632] Waited for 196.099154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683155  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 17:59:11.683168  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.683181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.683192  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.686310  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.882490  548894 request.go:632] Waited for 195.281424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:11.882569  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:11.882580  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:11.882587  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:11.885732  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:11.886206  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:11.886228  548894 pod_ready.go:82] duration metric: took 399.364956ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:11.886243  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.083083  548894 request.go:632] Waited for 196.741087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083174  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 17:59:12.083181  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.083193  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.083199  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.086438  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.282815  548894 request.go:632] Waited for 195.357265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282879  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:12.282884  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.282892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.282897  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.286211  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.286955  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.286978  548894 pod_ready.go:82] duration metric: took 400.728245ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.286989  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.483080  548894 request.go:632] Waited for 196.002385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483159  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 17:59:12.483167  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.483181  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.483193  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.486235  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.683233  548894 request.go:632] Waited for 196.354052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683315  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:12.683322  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.683334  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.683341  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.686419  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:12.687164  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:12.687194  548894 pod_ready.go:82] duration metric: took 400.198282ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.687210  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:12.883073  548894 request.go:632] Waited for 195.753943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883139  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 17:59:12.883145  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:12.883152  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:12.883156  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:12.886291  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.083210  548894 request.go:632] Waited for 196.369192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083288  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 17:59:13.083296  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.083304  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.083308  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.086479  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.087168  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.087188  548894 pod_ready.go:82] duration metric: took 399.968628ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.087198  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.283359  548894 request.go:632] Waited for 196.068525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283420  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 17:59:13.283425  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.283433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.283438  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.286484  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.482457  548894 request.go:632] Waited for 195.25665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482575  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 17:59:13.482588  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.482599  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.482605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.485671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.486395  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 17:59:13.486417  548894 pod_ready.go:82] duration metric: took 399.212171ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 17:59:13.486429  548894 pod_ready.go:39] duration metric: took 3.191309926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 17:59:13.486448  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 17:59:13.486516  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 17:59:13.501134  548894 api_server.go:72] duration metric: took 17.028092431s to wait for apiserver process to appear ...
	I1008 17:59:13.501165  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 17:59:13.501208  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 17:59:13.505717  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 17:59:13.506345  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 17:59:13.506369  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.506381  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.506389  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.508475  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 17:59:13.508579  548894 api_server.go:141] control plane version: v1.31.1
	I1008 17:59:13.508596  548894 api_server.go:131] duration metric: took 7.424538ms to wait for apiserver health ...
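The apiserver readiness check logged above is two plain GETs: /healthz must return 200 with body "ok", then /version reports the control-plane version (v1.31.1 here). A rough Go sketch of that probe; skipping TLS verification is only to keep the example self-contained, the real check trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Insecure TLS is an assumption made purely for brevity.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	resp, err := client.Get("https://192.168.39.99:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	body, _ := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 "ok"

    	resp, err = client.Get("https://192.168.39.99:8443/version")
    	if err != nil {
    		panic(err)
    	}
    	body, _ = io.ReadAll(resp.Body)
    	resp.Body.Close()
    	fmt.Printf("version: %s\n", body) // JSON containing gitVersion "v1.31.1"
    }
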
	I1008 17:59:13.508606  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 17:59:13.682454  548894 request.go:632] Waited for 173.762668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682527  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:13.682532  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.682541  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.682546  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.687595  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 17:59:13.692646  548894 system_pods.go:59] 17 kube-system pods found
	I1008 17:59:13.692692  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:13.692702  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:13.692707  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:13.692713  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:13.692718  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:13.692723  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:13.692730  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:13.692735  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:13.692744  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:13.692750  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:13.692755  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:13.692760  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:13.692765  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:13.692774  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:13.692778  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:13.692783  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:13.692788  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:13.692796  548894 system_pods.go:74] duration metric: took 184.183414ms to wait for pod list to return data ...
	I1008 17:59:13.692811  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 17:59:13.883264  548894 request.go:632] Waited for 190.350103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883340  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 17:59:13.883352  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:13.883364  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:13.883373  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:13.887200  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:13.887443  548894 default_sa.go:45] found service account: "default"
	I1008 17:59:13.887464  548894 default_sa.go:55] duration metric: took 194.642236ms for default service account to be created ...
	I1008 17:59:13.887473  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 17:59:14.083128  548894 request.go:632] Waited for 195.575348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083197  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 17:59:14.083204  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.083215  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.083224  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.087502  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 17:59:14.091850  548894 system_pods.go:86] 17 kube-system pods found
	I1008 17:59:14.091874  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 17:59:14.091880  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 17:59:14.091884  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 17:59:14.091888  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 17:59:14.091895  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 17:59:14.091898  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 17:59:14.091903  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 17:59:14.091909  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 17:59:14.091915  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 17:59:14.091921  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 17:59:14.091929  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 17:59:14.091935  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 17:59:14.091943  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 17:59:14.091948  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 17:59:14.091954  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 17:59:14.091958  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 17:59:14.091961  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 17:59:14.091969  548894 system_pods.go:126] duration metric: took 204.490014ms to wait for k8s-apps to be running ...
	I1008 17:59:14.091978  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 17:59:14.092031  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:14.107751  548894 system_svc.go:56] duration metric: took 15.765669ms WaitForService to wait for kubelet
	I1008 17:59:14.107782  548894 kubeadm.go:582] duration metric: took 17.634744099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 17:59:14.107804  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 17:59:14.283342  548894 request.go:632] Waited for 175.43028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283397  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 17:59:14.283402  548894 round_trippers.go:469] Request Headers:
	I1008 17:59:14.283410  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 17:59:14.283415  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 17:59:14.286910  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 17:59:14.287827  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287854  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287877  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 17:59:14.287883  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 17:59:14.287892  548894 node_conditions.go:105] duration metric: took 180.082842ms to run NodePressure ...
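The NodePressure check above reads each node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node here). A small Go sketch of listing that capacity with client-go; the kubeconfig path is an assumption:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Status.Capacity is a ResourceList keyed by resource name.
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
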
	I1008 17:59:14.287908  548894 start.go:241] waiting for startup goroutines ...
	I1008 17:59:14.287939  548894 start.go:255] writing updated cluster config ...
	I1008 17:59:14.289665  548894 out.go:201] 
	I1008 17:59:14.290934  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:14.291033  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.292598  548894 out.go:177] * Starting "ha-094095-m03" control-plane node in "ha-094095" cluster
	I1008 17:59:14.293602  548894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 17:59:14.293620  548894 cache.go:56] Caching tarball of preloaded images
	I1008 17:59:14.293722  548894 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 17:59:14.293741  548894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 17:59:14.293865  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:14.294036  548894 start.go:360] acquireMachinesLock for ha-094095-m03: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 17:59:14.294084  548894 start.go:364] duration metric: took 28.442µs to acquireMachinesLock for "ha-094095-m03"
	I1008 17:59:14.294116  548894 start.go:93] Provisioning new machine with config: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:14.294207  548894 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1008 17:59:14.295495  548894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 17:59:14.295567  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:14.295608  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:14.310848  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I1008 17:59:14.311356  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:14.311872  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:14.311899  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:14.312212  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:14.312396  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:14.312674  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:14.312844  548894 start.go:159] libmachine.API.Create for "ha-094095" (driver="kvm2")
	I1008 17:59:14.312876  548894 client.go:168] LocalClient.Create starting
	I1008 17:59:14.312902  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 17:59:14.312934  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.312948  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313000  548894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 17:59:14.313019  548894 main.go:141] libmachine: Decoding PEM data...
	I1008 17:59:14.313027  548894 main.go:141] libmachine: Parsing certificate...
	I1008 17:59:14.313042  548894 main.go:141] libmachine: Running pre-create checks...
	I1008 17:59:14.313050  548894 main.go:141] libmachine: (ha-094095-m03) Calling .PreCreateCheck
	I1008 17:59:14.313206  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:14.313583  548894 main.go:141] libmachine: Creating machine...
	I1008 17:59:14.313600  548894 main.go:141] libmachine: (ha-094095-m03) Calling .Create
	I1008 17:59:14.313739  548894 main.go:141] libmachine: (ha-094095-m03) Creating KVM machine...
	I1008 17:59:14.314906  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing default KVM network
	I1008 17:59:14.315074  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found existing private KVM network mk-ha-094095
	I1008 17:59:14.315221  548894 main.go:141] libmachine: (ha-094095-m03) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.315247  548894 main.go:141] libmachine: (ha-094095-m03) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:59:14.315327  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.315217  549655 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.315388  548894 main.go:141] libmachine: (ha-094095-m03) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 17:59:14.593209  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.593087  549655 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa...
	I1008 17:59:14.821442  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821329  549655 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk...
	I1008 17:59:14.821476  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing magic tar header
	I1008 17:59:14.821491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Writing SSH key tar header
	I1008 17:59:14.821502  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:14.821478  549655 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 ...
	I1008 17:59:14.821659  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03
	I1008 17:59:14.821694  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03 (perms=drwx------)
	I1008 17:59:14.821705  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 17:59:14.821719  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:59:14.821729  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 17:59:14.821740  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 17:59:14.821750  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home/jenkins
	I1008 17:59:14.821762  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 17:59:14.821772  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Checking permissions on dir: /home
	I1008 17:59:14.821784  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 17:59:14.821794  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Skipping /home - not owner
	I1008 17:59:14.821808  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 17:59:14.821819  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 17:59:14.821836  548894 main.go:141] libmachine: (ha-094095-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 17:59:14.821846  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:14.822739  548894 main.go:141] libmachine: (ha-094095-m03) define libvirt domain using xml: 
	I1008 17:59:14.822758  548894 main.go:141] libmachine: (ha-094095-m03) <domain type='kvm'>
	I1008 17:59:14.822767  548894 main.go:141] libmachine: (ha-094095-m03)   <name>ha-094095-m03</name>
	I1008 17:59:14.822774  548894 main.go:141] libmachine: (ha-094095-m03)   <memory unit='MiB'>2200</memory>
	I1008 17:59:14.822782  548894 main.go:141] libmachine: (ha-094095-m03)   <vcpu>2</vcpu>
	I1008 17:59:14.822792  548894 main.go:141] libmachine: (ha-094095-m03)   <features>
	I1008 17:59:14.822799  548894 main.go:141] libmachine: (ha-094095-m03)     <acpi/>
	I1008 17:59:14.822805  548894 main.go:141] libmachine: (ha-094095-m03)     <apic/>
	I1008 17:59:14.822815  548894 main.go:141] libmachine: (ha-094095-m03)     <pae/>
	I1008 17:59:14.822822  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.822827  548894 main.go:141] libmachine: (ha-094095-m03)   </features>
	I1008 17:59:14.822834  548894 main.go:141] libmachine: (ha-094095-m03)   <cpu mode='host-passthrough'>
	I1008 17:59:14.822838  548894 main.go:141] libmachine: (ha-094095-m03)   
	I1008 17:59:14.822842  548894 main.go:141] libmachine: (ha-094095-m03)   </cpu>
	I1008 17:59:14.822847  548894 main.go:141] libmachine: (ha-094095-m03)   <os>
	I1008 17:59:14.822857  548894 main.go:141] libmachine: (ha-094095-m03)     <type>hvm</type>
	I1008 17:59:14.822865  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='cdrom'/>
	I1008 17:59:14.822879  548894 main.go:141] libmachine: (ha-094095-m03)     <boot dev='hd'/>
	I1008 17:59:14.822888  548894 main.go:141] libmachine: (ha-094095-m03)     <bootmenu enable='no'/>
	I1008 17:59:14.822897  548894 main.go:141] libmachine: (ha-094095-m03)   </os>
	I1008 17:59:14.822903  548894 main.go:141] libmachine: (ha-094095-m03)   <devices>
	I1008 17:59:14.822910  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='cdrom'>
	I1008 17:59:14.822919  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/boot2docker.iso'/>
	I1008 17:59:14.822926  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hdc' bus='scsi'/>
	I1008 17:59:14.822931  548894 main.go:141] libmachine: (ha-094095-m03)       <readonly/>
	I1008 17:59:14.822939  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.822951  548894 main.go:141] libmachine: (ha-094095-m03)     <disk type='file' device='disk'>
	I1008 17:59:14.822984  548894 main.go:141] libmachine: (ha-094095-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 17:59:14.822998  548894 main.go:141] libmachine: (ha-094095-m03)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/ha-094095-m03.rawdisk'/>
	I1008 17:59:14.823004  548894 main.go:141] libmachine: (ha-094095-m03)       <target dev='hda' bus='virtio'/>
	I1008 17:59:14.823008  548894 main.go:141] libmachine: (ha-094095-m03)     </disk>
	I1008 17:59:14.823012  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823018  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='mk-ha-094095'/>
	I1008 17:59:14.823028  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823037  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823050  548894 main.go:141] libmachine: (ha-094095-m03)     <interface type='network'>
	I1008 17:59:14.823062  548894 main.go:141] libmachine: (ha-094095-m03)       <source network='default'/>
	I1008 17:59:14.823072  548894 main.go:141] libmachine: (ha-094095-m03)       <model type='virtio'/>
	I1008 17:59:14.823080  548894 main.go:141] libmachine: (ha-094095-m03)     </interface>
	I1008 17:59:14.823089  548894 main.go:141] libmachine: (ha-094095-m03)     <serial type='pty'>
	I1008 17:59:14.823097  548894 main.go:141] libmachine: (ha-094095-m03)       <target port='0'/>
	I1008 17:59:14.823105  548894 main.go:141] libmachine: (ha-094095-m03)     </serial>
	I1008 17:59:14.823114  548894 main.go:141] libmachine: (ha-094095-m03)     <console type='pty'>
	I1008 17:59:14.823128  548894 main.go:141] libmachine: (ha-094095-m03)       <target type='serial' port='0'/>
	I1008 17:59:14.823139  548894 main.go:141] libmachine: (ha-094095-m03)     </console>
	I1008 17:59:14.823147  548894 main.go:141] libmachine: (ha-094095-m03)     <rng model='virtio'>
	I1008 17:59:14.823159  548894 main.go:141] libmachine: (ha-094095-m03)       <backend model='random'>/dev/random</backend>
	I1008 17:59:14.823166  548894 main.go:141] libmachine: (ha-094095-m03)     </rng>
	I1008 17:59:14.823173  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823181  548894 main.go:141] libmachine: (ha-094095-m03)     
	I1008 17:59:14.823189  548894 main.go:141] libmachine: (ha-094095-m03)   </devices>
	I1008 17:59:14.823202  548894 main.go:141] libmachine: (ha-094095-m03) </domain>
	I1008 17:59:14.823214  548894 main.go:141] libmachine: (ha-094095-m03) 
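The XML dumped above is the domain definition the kvm2 driver hands to libvirt for the new m03 VM. Outside minikube, a roughly equivalent way to register and boot that definition is virsh; the sketch below shells out to it from Go, which is not the driver's actual code path (it talks to the libvirt API directly), and the XML file path is hypothetical:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Assumes the generated domain XML was saved to this (hypothetical) path.
    	xmlPath := "/tmp/ha-094095-m03.xml"

    	// "virsh define" registers the domain with libvirt; "virsh start" boots it.
    	for _, args := range [][]string{
    		{"define", xmlPath},
    		{"start", "ha-094095-m03"},
    	} {
    		out, err := exec.Command("virsh", args...).CombinedOutput()
    		fmt.Printf("virsh %v: %s\n", args, out)
    		if err != nil {
    			panic(err)
    		}
    	}
    }
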
	I1008 17:59:14.829896  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:d4:34:b1 in network default
	I1008 17:59:14.830619  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:14.830642  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring networks are active...
	I1008 17:59:14.831385  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network default is active
	I1008 17:59:14.831784  548894 main.go:141] libmachine: (ha-094095-m03) Ensuring network mk-ha-094095 is active
	I1008 17:59:14.832205  548894 main.go:141] libmachine: (ha-094095-m03) Getting domain xml...
	I1008 17:59:14.832929  548894 main.go:141] libmachine: (ha-094095-m03) Creating domain...
	I1008 17:59:16.039421  548894 main.go:141] libmachine: (ha-094095-m03) Waiting to get IP...
	I1008 17:59:16.040212  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.040604  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.040627  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.040576  549655 retry.go:31] will retry after 310.617511ms: waiting for machine to come up
	I1008 17:59:16.353098  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.353638  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.353666  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.353600  549655 retry.go:31] will retry after 370.013025ms: waiting for machine to come up
	I1008 17:59:16.725039  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:16.725471  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:16.725511  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:16.725419  549655 retry.go:31] will retry after 335.057817ms: waiting for machine to come up
	I1008 17:59:17.061762  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.062145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.062168  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.062095  549655 retry.go:31] will retry after 553.959397ms: waiting for machine to come up
	I1008 17:59:17.617869  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:17.618404  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:17.618431  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:17.618345  549655 retry.go:31] will retry after 506.335647ms: waiting for machine to come up
	I1008 17:59:18.125977  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.126353  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.126384  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.126291  549655 retry.go:31] will retry after 734.408354ms: waiting for machine to come up
	I1008 17:59:18.862107  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:18.862605  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:18.862632  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:18.862544  549655 retry.go:31] will retry after 1.020122482s: waiting for machine to come up
	I1008 17:59:19.884038  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:19.884492  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:19.884530  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:19.884425  549655 retry.go:31] will retry after 1.125801014s: waiting for machine to come up
	I1008 17:59:21.011532  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:21.011993  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:21.012020  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:21.011944  549655 retry.go:31] will retry after 1.660141079s: waiting for machine to come up
	I1008 17:59:22.673143  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:22.673540  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:22.673570  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:22.673522  549655 retry.go:31] will retry after 1.579793422s: waiting for machine to come up
	I1008 17:59:24.255498  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:24.256062  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:24.256089  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:24.256014  549655 retry.go:31] will retry after 2.586780396s: waiting for machine to come up
	I1008 17:59:26.845780  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:26.846232  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:26.846256  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:26.846181  549655 retry.go:31] will retry after 2.461770006s: waiting for machine to come up
	I1008 17:59:29.309639  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:29.310146  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:29.310176  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:29.310088  549655 retry.go:31] will retry after 4.519355473s: waiting for machine to come up
	I1008 17:59:33.833985  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:33.834361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find current IP address of domain ha-094095-m03 in network mk-ha-094095
	I1008 17:59:33.834386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | I1008 17:59:33.834293  549655 retry.go:31] will retry after 3.493644498s: waiting for machine to come up
	I1008 17:59:37.331421  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.331914  548894 main.go:141] libmachine: (ha-094095-m03) Found IP for machine: 192.168.39.194
	I1008 17:59:37.331939  548894 main.go:141] libmachine: (ha-094095-m03) Reserving static IP address...
	I1008 17:59:37.331956  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has current primary IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.332395  548894 main.go:141] libmachine: (ha-094095-m03) DBG | unable to find host DHCP lease matching {name: "ha-094095-m03", mac: "52:54:00:e6:8f:e3", ip: "192.168.39.194"} in network mk-ha-094095
	I1008 17:59:37.404136  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Getting to WaitForSSH function...
	I1008 17:59:37.404175  548894 main.go:141] libmachine: (ha-094095-m03) Reserved static IP address: 192.168.39.194
	I1008 17:59:37.404188  548894 main.go:141] libmachine: (ha-094095-m03) Waiting for SSH to be available...
	I1008 17:59:37.406755  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407114  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.407145  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.407257  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH client type: external
	I1008 17:59:37.407295  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa (-rw-------)
	I1008 17:59:37.407348  548894 main.go:141] libmachine: (ha-094095-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 17:59:37.407377  548894 main.go:141] libmachine: (ha-094095-m03) DBG | About to run SSH command:
	I1008 17:59:37.407391  548894 main.go:141] libmachine: (ha-094095-m03) DBG | exit 0
	I1008 17:59:37.534234  548894 main.go:141] libmachine: (ha-094095-m03) DBG | SSH cmd err, output: <nil>: 
	I1008 17:59:37.534542  548894 main.go:141] libmachine: (ha-094095-m03) KVM machine creation complete!
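The "will retry after …: waiting for machine to come up" lines above are a jittered retry loop: each attempt looks for the domain's DHCP lease and, while no IP is assigned, sleeps a randomized, growing interval before trying again. A minimal sketch of that shape; the backoff constants and the checkIP helper are assumptions, not minikube's actual retry.go:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // checkIP stands in for "look up the domain's DHCP lease"; it always fails
    // here so the backoff behaviour is visible when the sketch is run.
    func checkIP() (string, error) {
    	return "", errors.New("unable to find current IP address of domain")
    }

    func main() {
    	base := 300 * time.Millisecond
    	for attempt := 1; attempt <= 15; attempt++ {
    		if ip, err := checkIP(); err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		// Grow the wait and add jitter, roughly matching the 300ms..4.5s
    		// intervals seen in the log above.
    		wait := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		base = base * 3 / 2
    	}
    	fmt.Println("timed out waiting for machine IP")
    }
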
	I1008 17:59:37.535062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:37.535615  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.535835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:37.536043  548894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 17:59:37.536062  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetState
	I1008 17:59:37.537459  548894 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 17:59:37.537477  548894 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 17:59:37.537484  548894 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 17:59:37.537492  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.539962  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540458  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.540491  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.540661  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.540847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.540985  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.541188  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.541386  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.541674  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.541690  548894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 17:59:37.649416  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:37.649443  548894 main.go:141] libmachine: Detecting the provisioner...
	I1008 17:59:37.649452  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.652360  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652754  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.652783  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.652904  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.653099  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653253  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.653372  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.653521  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.653691  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.653700  548894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 17:59:37.763719  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 17:59:37.763801  548894 main.go:141] libmachine: found compatible host: buildroot
	I1008 17:59:37.763820  548894 main.go:141] libmachine: Provisioning with buildroot...
	I1008 17:59:37.763835  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764121  548894 buildroot.go:166] provisioning hostname "ha-094095-m03"
	I1008 17:59:37.764156  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:37.764347  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.766798  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.767194  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.767402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.767617  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767784  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.767982  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.768161  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.768362  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.768381  548894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095-m03 && echo "ha-094095-m03" | sudo tee /etc/hostname
	I1008 17:59:37.892598  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095-m03
	
	I1008 17:59:37.892638  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:37.895717  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896104  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:37.896139  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:37.896357  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:37.896582  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896764  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:37.896930  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:37.897130  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:37.897346  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:37.897371  548894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 17:59:38.015892  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 17:59:38.015942  548894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 17:59:38.015964  548894 buildroot.go:174] setting up certificates
	I1008 17:59:38.015976  548894 provision.go:84] configureAuth start
	I1008 17:59:38.015994  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetMachineName
	I1008 17:59:38.016285  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.018925  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019329  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.019361  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.019480  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.021681  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022085  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.022109  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.022295  548894 provision.go:143] copyHostCerts
	I1008 17:59:38.022355  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022398  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 17:59:38.022410  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 17:59:38.022497  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 17:59:38.022612  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022639  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 17:59:38.022646  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 17:59:38.022684  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 17:59:38.022749  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022772  548894 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 17:59:38.022780  548894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 17:59:38.022817  548894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 17:59:38.022905  548894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095-m03 san=[127.0.0.1 192.168.39.194 ha-094095-m03 localhost minikube]
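The "generating server cert" step above issues a TLS serving certificate signed by the minikube CA, with the node's IP and hostnames as subject alternative names. A compact Go sketch of that kind of issuance; the key size, validity period, and function shape are assumptions, and only the SAN list mirrors the log:

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate with an existing CA cert/key and
    // returns the DER-encoded certificate plus its private key.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-094095-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as logged: 127.0.0.1 192.168.39.194 ha-094095-m03 localhost minikube.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.194")},
    		DNSNames:    []string{"ha-094095-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
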
	I1008 17:59:38.409825  548894 provision.go:177] copyRemoteCerts
	I1008 17:59:38.409880  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 17:59:38.409906  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.412474  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.412819  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.412850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.413057  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.413233  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.413436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.413614  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.500707  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 17:59:38.500793  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 17:59:38.526942  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 17:59:38.527009  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 17:59:38.552205  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 17:59:38.552273  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 17:59:38.575397  548894 provision.go:87] duration metric: took 559.401387ms to configureAuth
	I1008 17:59:38.575426  548894 buildroot.go:189] setting minikube options for container-runtime
	I1008 17:59:38.575799  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:38.575895  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.579241  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579746  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.579778  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.579962  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.580162  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580375  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.580557  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.580756  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.580976  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.581001  548894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 17:59:38.814916  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 17:59:38.814943  548894 main.go:141] libmachine: Checking connection to Docker...
	I1008 17:59:38.814951  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetURL
	I1008 17:59:38.816195  548894 main.go:141] libmachine: (ha-094095-m03) DBG | Using libvirt version 6000000
	I1008 17:59:38.818782  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819155  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.819181  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.819313  548894 main.go:141] libmachine: Docker is up and running!
	I1008 17:59:38.819324  548894 main.go:141] libmachine: Reticulating splines...
	I1008 17:59:38.819331  548894 client.go:171] duration metric: took 24.506447945s to LocalClient.Create
	I1008 17:59:38.819354  548894 start.go:167] duration metric: took 24.506513664s to libmachine.API.Create "ha-094095"
	I1008 17:59:38.819366  548894 start.go:293] postStartSetup for "ha-094095-m03" (driver="kvm2")
	I1008 17:59:38.819379  548894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 17:59:38.819402  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:38.819667  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 17:59:38.819695  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.822386  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.822850  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.822878  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.823079  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.823255  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.823425  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.823576  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:38.911016  548894 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 17:59:38.915516  548894 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 17:59:38.915544  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 17:59:38.915616  548894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 17:59:38.915703  548894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 17:59:38.915717  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 17:59:38.915843  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 17:59:38.927016  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:38.951613  548894 start.go:296] duration metric: took 132.232716ms for postStartSetup
	I1008 17:59:38.951663  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetConfigRaw
	I1008 17:59:38.952254  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:38.954773  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955177  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.955206  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.955479  548894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 17:59:38.955726  548894 start.go:128] duration metric: took 24.661507137s to createHost
	I1008 17:59:38.955754  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:38.957824  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958152  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:38.958180  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:38.958260  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:38.958436  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958614  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:38.958783  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:38.958982  548894 main.go:141] libmachine: Using SSH client type: native
	I1008 17:59:38.959149  548894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I1008 17:59:38.959198  548894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 17:59:39.066802  548894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410379.042145365
	
	I1008 17:59:39.066831  548894 fix.go:216] guest clock: 1728410379.042145365
	I1008 17:59:39.066838  548894 fix.go:229] Guest: 2024-10-08 17:59:39.042145365 +0000 UTC Remote: 2024-10-08 17:59:38.955741605 +0000 UTC m=+140.046701810 (delta=86.40376ms)
	I1008 17:59:39.066854  548894 fix.go:200] guest clock delta is within tolerance: 86.40376ms
	I1008 17:59:39.066859  548894 start.go:83] releasing machines lock for "ha-094095-m03", held for 24.772764688s
	I1008 17:59:39.066879  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.067121  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:39.069711  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.070086  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.070113  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.072386  548894 out.go:177] * Found network options:
	I1008 17:59:39.073842  548894 out.go:177]   - NO_PROXY=192.168.39.99,192.168.39.65
	W1008 17:59:39.075265  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.075288  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.075301  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.075811  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076009  548894 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 17:59:39.076099  548894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 17:59:39.076150  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	W1008 17:59:39.076202  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	W1008 17:59:39.076228  548894 proxy.go:119] fail to check proxy env: Error ip not in block
	I1008 17:59:39.076306  548894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 17:59:39.076328  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 17:59:39.078554  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.078807  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079018  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079043  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079229  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079324  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:39.079350  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:39.079420  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.079542  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 17:59:39.079593  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.079786  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.079847  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 17:59:39.080000  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 17:59:39.080138  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 17:59:39.318698  548894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 17:59:39.324927  548894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 17:59:39.324990  548894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 17:59:39.343637  548894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 17:59:39.343660  548894 start.go:495] detecting cgroup driver to use...
	I1008 17:59:39.343717  548894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 17:59:39.360309  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 17:59:39.373825  548894 docker.go:217] disabling cri-docker service (if available) ...
	I1008 17:59:39.373881  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 17:59:39.387260  548894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 17:59:39.400202  548894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 17:59:39.520831  548894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 17:59:39.680675  548894 docker.go:233] disabling docker service ...
	I1008 17:59:39.680761  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 17:59:39.695394  548894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 17:59:39.710367  548894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 17:59:39.839252  548894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 17:59:39.972794  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 17:59:39.988321  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 17:59:40.006947  548894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 17:59:40.007031  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.018072  548894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 17:59:40.018137  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.029758  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.040612  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.051467  548894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 17:59:40.062960  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.074528  548894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 17:59:40.091933  548894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
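The sed edits above amount to a small drop-in configuration for cri-o. Reconstructed from the commands in this log (the [crio.image]/[crio.runtime] section names follow cri-o's documented config layout and are an assumption here, not copied from the guest), the resulting /etc/crio/crio.conf.d/02-crio.conf fragment would look roughly like this sketch:

    # Sketch only: net effect of the sed commands above; section headers assumed
    # from cri-o's documented layout, values taken from this log
    cat <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF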
	I1008 17:59:40.101742  548894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 17:59:40.111189  548894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 17:59:40.111232  548894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 17:59:40.123431  548894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
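The two steps above cover a common CNI prerequisite: bridged pod traffic must be visible to iptables (br_netfilter) and the node must forward IPv4. The failed sysctl read just means the module was not loaded yet, which is why the log falls back to modprobe. An equivalent manual sequence, as a sketch rather than minikube's actual code path:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables=1   # readable/settable only once br_netfilter is loaded
    sudo sysctl net.ipv4.ip_forward=1                  # same effect as the echo into /proc above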
	I1008 17:59:40.132781  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:40.256434  548894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 17:59:40.349829  548894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 17:59:40.349903  548894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 17:59:40.354785  548894 start.go:563] Will wait 60s for crictl version
	I1008 17:59:40.354842  548894 ssh_runner.go:195] Run: which crictl
	I1008 17:59:40.358519  548894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 17:59:40.397714  548894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 17:59:40.397812  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.425086  548894 ssh_runner.go:195] Run: crio --version
	I1008 17:59:40.452883  548894 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 17:59:40.454244  548894 out.go:177]   - env NO_PROXY=192.168.39.99
	I1008 17:59:40.455477  548894 out.go:177]   - env NO_PROXY=192.168.39.99,192.168.39.65
	I1008 17:59:40.456757  548894 main.go:141] libmachine: (ha-094095-m03) Calling .GetIP
	I1008 17:59:40.459422  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.459818  548894 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 17:59:40.459840  548894 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 17:59:40.460096  548894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 17:59:40.464498  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:40.479877  548894 mustload.go:65] Loading cluster: ha-094095
	I1008 17:59:40.480107  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:59:40.480402  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.480441  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.495933  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I1008 17:59:40.496453  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.496925  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.496949  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.497271  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.497471  548894 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 17:59:40.499057  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:40.499430  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:40.499465  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:40.513547  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I1008 17:59:40.514005  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:40.514450  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:40.514473  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:40.514842  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:40.515015  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:40.515189  548894 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.194
	I1008 17:59:40.515202  548894 certs.go:194] generating shared ca certs ...
	I1008 17:59:40.515221  548894 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.515367  548894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 17:59:40.515423  548894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 17:59:40.515435  548894 certs.go:256] generating profile certs ...
	I1008 17:59:40.515545  548894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 17:59:40.515578  548894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d
	I1008 17:59:40.515597  548894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 17:59:40.734889  548894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d ...
	I1008 17:59:40.734923  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d: {Name:mkaac2d16400496ba6ef1c81a4206e8cf0480e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735091  548894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d ...
	I1008 17:59:40.735104  548894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d: {Name:mk3a55a29959b59f407eb97877f8ee016f652037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 17:59:40.735177  548894 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 17:59:40.735309  548894 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.5c7d776d -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 17:59:40.735433  548894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 17:59:40.735451  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 17:59:40.735464  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 17:59:40.735479  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 17:59:40.735491  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 17:59:40.735503  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 17:59:40.735514  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 17:59:40.735528  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 17:59:40.750415  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 17:59:40.750523  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 17:59:40.750564  548894 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 17:59:40.750576  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 17:59:40.750597  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 17:59:40.750620  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 17:59:40.750642  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 17:59:40.750679  548894 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 17:59:40.750709  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:40.750727  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 17:59:40.750739  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 17:59:40.750776  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:40.754187  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754657  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:40.754682  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:40.754891  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:40.755083  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:40.755214  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:40.755357  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:40.826678  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1008 17:59:40.831630  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1008 17:59:40.843594  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1008 17:59:40.848493  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1008 17:59:40.859904  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1008 17:59:40.864097  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1008 17:59:40.874362  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1008 17:59:40.878501  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1008 17:59:40.890535  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1008 17:59:40.895442  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1008 17:59:40.907886  548894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1008 17:59:40.911759  548894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1008 17:59:40.921878  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 17:59:40.947644  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 17:59:40.970914  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 17:59:40.993912  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 17:59:41.017348  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1008 17:59:41.040662  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 17:59:41.063411  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 17:59:41.086440  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 17:59:41.109681  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 17:59:41.132484  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 17:59:41.156226  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 17:59:41.178867  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1008 17:59:41.195488  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1008 17:59:41.212613  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1008 17:59:41.228807  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1008 17:59:41.246244  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1008 17:59:41.262224  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1008 17:59:41.277985  548894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1008 17:59:41.294525  548894 ssh_runner.go:195] Run: openssl version
	I1008 17:59:41.300038  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 17:59:41.311084  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315442  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.315488  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 17:59:41.321163  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 17:59:41.332088  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 17:59:41.342926  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347780  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.347833  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 17:59:41.353198  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 17:59:41.363300  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 17:59:41.373282  548894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377636  548894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.377682  548894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 17:59:41.383451  548894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
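The test -L / ln -fs commands above build OpenSSL-style subject-hash links, which is how TLS clients on the guest locate these CAs under /etc/ssl/certs. The hash in each link name (b5213941.0, 3ec20f2e.0, 51391683.0) comes from the openssl x509 -hash calls in the log. A minimal sketch of the same step for one certificate:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")      # prints e.g. b5213941, matching the log
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"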
	I1008 17:59:41.393738  548894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 17:59:41.397604  548894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 17:59:41.397660  548894 kubeadm.go:934] updating node {m03 192.168.39.194 8443 v1.31.1 crio true true} ...
	I1008 17:59:41.397755  548894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 17:59:41.397799  548894 kube-vip.go:115] generating kube-vip config ...
	I1008 17:59:41.397831  548894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 17:59:41.412820  548894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 17:59:41.412901  548894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
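This generated manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down (the 1441-byte scp), so kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on eth0 and elects a leader via the plndr-cp-lock lease named in the env vars above. A quick way to inspect the election state once the node is up, as a sketch outside the test itself:

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
    ip -4 addr show dev eth0 | grep 192.168.39.254    # present only on the current VIP holder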
	I1008 17:59:41.412955  548894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.422366  548894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1008 17:59:41.422410  548894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1008 17:59:41.431355  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1008 17:59:41.431384  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431397  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1008 17:59:41.431416  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431363  548894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1008 17:59:41.431468  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1008 17:59:41.431494  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 17:59:41.446391  548894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.446418  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1008 17:59:41.446444  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1008 17:59:41.446446  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1008 17:59:41.446463  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1008 17:59:41.447018  548894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1008 17:59:41.480884  548894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1008 17:59:41.480970  548894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1008 17:59:42.313012  548894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1008 17:59:42.322438  548894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1008 17:59:42.338702  548894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 17:59:42.365144  548894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 17:59:42.382514  548894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 17:59:42.386113  548894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 17:59:42.397995  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 17:59:42.523088  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 17:59:42.540754  548894 host.go:66] Checking if "ha-094095" exists ...
	I1008 17:59:42.541257  548894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:59:42.541326  548894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:59:42.559172  548894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I1008 17:59:42.559678  548894 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:59:42.560333  548894 main.go:141] libmachine: Using API Version  1
	I1008 17:59:42.560360  548894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:59:42.560754  548894 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:59:42.560977  548894 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 17:59:42.561148  548894 start.go:317] joinCluster: &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:59:42.561320  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1008 17:59:42.561345  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 17:59:42.564781  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565346  548894 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 17:59:42.565377  548894 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 17:59:42.565645  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 17:59:42.565831  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 17:59:42.566030  548894 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 17:59:42.566199  548894 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 17:59:42.729842  548894 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 17:59:42.729907  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443"
	I1008 18:00:04.832594  548894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jbdrzy.lmjg8wo47jhkl16a --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-094095-m03 --control-plane --apiserver-advertise-address=192.168.39.194 --apiserver-bind-port=8443": (22.102635583s)
	I1008 18:00:04.832637  548894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1008 18:00:05.279641  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-094095-m03 minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=ha-094095 minikube.k8s.io/primary=false
	I1008 18:00:05.406989  548894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-094095-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1008 18:00:05.528741  548894 start.go:319] duration metric: took 22.967581062s to joinCluster
	I1008 18:00:05.528848  548894 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:00:05.529236  548894 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:00:05.530083  548894 out.go:177] * Verifying Kubernetes components...
	I1008 18:00:05.531162  548894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:00:05.714521  548894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:00:05.729813  548894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:00:05.730150  548894 kapi.go:59] client config for ha-094095: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.crt", KeyFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key", CAFile:"/home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1008 18:00:05.730231  548894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I1008 18:00:05.730539  548894 node_ready.go:35] waiting up to 6m0s for node "ha-094095-m03" to be "Ready" ...
	I1008 18:00:05.730633  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:05.730651  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:05.730664  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:05.730673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:05.734671  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.231617  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.231641  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.231650  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.231655  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.234903  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:06.731584  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:06.731606  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:06.731615  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:06.731620  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:06.735426  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.231620  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.231630  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.231634  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.235355  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:07.730822  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:07.730855  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:07.730867  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:07.730873  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:07.735340  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:07.736449  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:08.230853  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.230878  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.230887  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.230892  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.234386  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:08.731681  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:08.731712  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:08.731722  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:08.731727  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:08.735243  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.231587  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.231609  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.231618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.231623  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.235294  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:09.731675  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:09.731700  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:09.731709  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:09.731713  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:09.735299  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.231249  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.231335  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.231353  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.231359  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.234866  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:10.235558  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:10.731835  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:10.731862  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:10.731876  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:10.731881  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:10.735185  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.231597  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.231623  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.231632  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.231636  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.235238  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:11.731791  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:11.731826  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:11.731839  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:11.731845  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:11.735179  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.231312  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.231339  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.231350  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.231356  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.234779  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:12.235754  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:12.731629  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:12.731658  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:12.731669  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:12.731673  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:12.735274  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.231468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.231492  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.231500  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.231503  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.234905  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:13.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:13.731604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:13.731613  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:13.731618  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:13.734788  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.231250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.231274  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.231282  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.231287  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.234694  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.731084  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:14.731109  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:14.731117  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:14.731121  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:14.735096  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:14.735874  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:15.231041  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.231070  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.231079  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.231083  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.234482  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:15.731250  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:15.731276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:15.731288  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:15.731296  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:15.734547  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.230897  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.230919  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.230928  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.230937  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.234261  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.731577  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:16.731599  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:16.731608  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:16.731612  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:16.735249  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:16.736046  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:17.231278  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.231302  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.231311  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.231316  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.234212  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:17.731562  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:17.731585  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:17.731594  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:17.731597  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:17.735391  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.231528  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.231552  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.231561  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.231565  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.234777  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.731570  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:18.731593  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:18.731601  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:18.731608  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:18.735359  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:18.736085  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:19.231579  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.231604  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.231618  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.231622  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.234902  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:19.731112  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:19.731142  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:19.731155  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:19.731162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:19.734221  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.231563  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.231591  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.231600  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.231605  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.234855  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:20.731738  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:20.731773  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:20.731785  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:20.731792  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:20.735486  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.231659  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.231685  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.231696  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.231705  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.234967  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:21.235427  548894 node_ready.go:53] node "ha-094095-m03" has status "Ready":"False"
	I1008 18:00:21.730803  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:21.730829  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:21.730838  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:21.730843  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:21.734021  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.231586  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.231613  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.231624  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.231630  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.234981  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:22.731022  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:22.731056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:22.731064  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:22.731070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:22.734252  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.231192  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.231215  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.231223  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.231228  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.234975  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.235794  548894 node_ready.go:49] node "ha-094095-m03" has status "Ready":"True"
	I1008 18:00:23.235816  548894 node_ready.go:38] duration metric: took 17.50525839s for node "ha-094095-m03" to be "Ready" ...
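[Editor's note, not part of the captured log] The loop above polls GET /api/v1/nodes/ha-094095-m03 roughly every 500ms until the node reports Ready. A minimal client-go sketch of the same check is below; the kubeconfig path and node name are placeholders, and this is an illustration of the pattern, not minikube's node_ready.go code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the context expires (mirrors the ~500ms cadence seen in the log).
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-094095-m03"))
}
```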
	I1008 18:00:23.235826  548894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:23.235893  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:23.235903  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.235914  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.235918  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.241231  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:23.248355  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.248435  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6c7xl
	I1008 18:00:23.248444  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.248452  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.248456  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.250946  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.251489  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.251502  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.251510  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.251515  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.253741  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.254169  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.254188  548894 pod_ready.go:82] duration metric: took 5.808287ms for pod "coredns-7c65d6cfc9-6c7xl" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254199  548894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.254280  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghz9x
	I1008 18:00:23.254291  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.254300  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.254309  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.256714  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.257261  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.257276  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.257283  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.257286  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.259498  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.260042  548894 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.260061  548894 pod_ready.go:82] duration metric: took 5.850763ms for pod "coredns-7c65d6cfc9-ghz9x" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260072  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.260132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095
	I1008 18:00:23.260143  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.260153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.260162  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.262300  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.262973  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:23.262989  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.262999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.263005  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.265000  548894 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1008 18:00:23.265522  548894 pod_ready.go:93] pod "etcd-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.265544  548894 pod_ready.go:82] duration metric: took 5.464426ms for pod "etcd-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265555  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.265622  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m02
	I1008 18:00:23.265634  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.265643  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.265648  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.267966  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.268468  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:23.268479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.268486  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.268491  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.270736  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:23.271272  548894 pod_ready.go:93] pod "etcd-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.271290  548894 pod_ready.go:82] duration metric: took 5.727216ms for pod "etcd-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.271300  548894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.431729  548894 request.go:632] Waited for 160.342792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431825  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-094095-m03
	I1008 18:00:23.431837  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.431850  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.431861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.438271  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:23.631298  548894 request.go:632] Waited for 192.164013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631383  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:23.631391  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.631408  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.631433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.635040  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:23.635580  548894 pod_ready.go:93] pod "etcd-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:23.635599  548894 pod_ready.go:82] duration metric: took 364.291447ms for pod "etcd-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
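[Editor's note, not part of the captured log] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter, governed by the QPS and Burst fields of rest.Config (dumped as 0 in the config above, which means client-go falls back to its own low defaults, commonly cited as about 5 QPS / burst 10; treat those exact values as an assumption to verify against the client-go version in use). A hedged sketch of raising the limits:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Raise the client-side limits so bursts of GETs (like the node/pod
	// polls in this log) are less likely to be delayed locally.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```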
	I1008 18:00:23.635618  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:23.831837  548894 request.go:632] Waited for 196.121278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831896  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095
	I1008 18:00:23.831902  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:23.831909  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:23.831913  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:23.834801  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.031893  548894 request.go:632] Waited for 196.106655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031976  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:24.031981  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.031989  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.031993  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.035406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.036144  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.036163  548894 pod_ready.go:82] duration metric: took 400.535944ms for pod "kube-apiserver-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.036173  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.232096  548894 request.go:632] Waited for 195.798323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232173  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m02
	I1008 18:00:24.232180  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.232192  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.232201  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.235054  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.432054  548894 request.go:632] Waited for 196.298402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432116  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:24.432121  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.432128  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.432132  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.435456  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.436205  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.436233  548894 pod_ready.go:82] duration metric: took 400.05192ms for pod "kube-apiserver-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.436253  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.631271  548894 request.go:632] Waited for 194.926969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631366  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-094095-m03
	I1008 18:00:24.631374  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.631384  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.631390  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.635001  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:24.831928  548894 request.go:632] Waited for 195.938579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832009  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:24.832015  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:24.832023  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:24.832027  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:24.834879  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:24.835519  548894 pod_ready.go:93] pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:24.835541  548894 pod_ready.go:82] duration metric: took 399.279605ms for pod "kube-apiserver-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:24.835556  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.031600  548894 request.go:632] Waited for 195.955469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031671  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095
	I1008 18:00:25.031676  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.031684  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.031689  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.035187  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.231262  548894 request.go:632] Waited for 195.293412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231320  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:25.231326  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.231339  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.231343  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.234515  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.235363  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.235391  548894 pod_ready.go:82] duration metric: took 399.824349ms for pod "kube-controller-manager-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.235422  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.431278  548894 request.go:632] Waited for 195.760337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431347  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m02
	I1008 18:00:25.431353  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.431375  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.431379  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.434406  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.631990  548894 request.go:632] Waited for 196.659604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632053  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:25.632058  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.632067  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.632070  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.635545  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:25.636227  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:25.636248  548894 pod_ready.go:82] duration metric: took 400.813116ms for pod "kube-controller-manager-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.636259  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:25.831790  548894 request.go:632] Waited for 195.428011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831873  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-094095-m03
	I1008 18:00:25.831885  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:25.831896  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:25.831903  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:25.835520  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.031847  548894 request.go:632] Waited for 195.394713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031926  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.031931  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.031939  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.031943  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.034885  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:26.035588  548894 pod_ready.go:93] pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.035611  548894 pod_ready.go:82] duration metric: took 399.345696ms for pod "kube-controller-manager-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.035622  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.231657  548894 request.go:632] Waited for 195.935325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231715  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnmch
	I1008 18:00:26.231720  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.231728  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.231732  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.234989  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.432143  548894 request.go:632] Waited for 196.401893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432242  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:26.432253  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.432262  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.432270  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.435436  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.436096  548894 pod_ready.go:93] pod "kube-proxy-gnmch" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.436113  548894 pod_ready.go:82] duration metric: took 400.484447ms for pod "kube-proxy-gnmch" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.436124  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.632222  548894 request.go:632] Waited for 196.022184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632309  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-krxss
	I1008 18:00:26.632317  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.632325  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.632332  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.636157  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.831362  548894 request.go:632] Waited for 194.278962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831419  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:26.831424  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:26.831433  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:26.831445  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:26.834670  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:26.835262  548894 pod_ready.go:93] pod "kube-proxy-krxss" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:26.835280  548894 pod_ready.go:82] duration metric: took 399.149562ms for pod "kube-proxy-krxss" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:26.835292  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.031407  548894 request.go:632] Waited for 196.014244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031471  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r55hk
	I1008 18:00:27.031479  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.031490  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.031499  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.034651  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.231683  548894 request.go:632] Waited for 196.28215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231743  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:27.231750  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.231761  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.231766  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.234677  548894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1008 18:00:27.235361  548894 pod_ready.go:93] pod "kube-proxy-r55hk" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.235391  548894 pod_ready.go:82] duration metric: took 400.091229ms for pod "kube-proxy-r55hk" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.235405  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.431237  548894 request.go:632] Waited for 195.72193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431329  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095
	I1008 18:00:27.431337  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.431353  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.431360  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.434428  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.631604  548894 request.go:632] Waited for 196.391274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631664  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095
	I1008 18:00:27.631669  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.631678  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.631683  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.635129  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:27.635990  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:27.636017  548894 pod_ready.go:82] duration metric: took 400.603779ms for pod "kube-scheduler-ha-094095" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.636029  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:27.832057  548894 request.go:632] Waited for 195.932393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832129  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m02
	I1008 18:00:27.832137  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:27.832147  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:27.832152  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:27.835638  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.031786  548894 request.go:632] Waited for 195.242001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031845  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m02
	I1008 18:00:28.031850  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.031857  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.031861  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.035281  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.035945  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.035968  548894 pod_ready.go:82] duration metric: took 399.926983ms for pod "kube-scheduler-ha-094095-m02" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.035978  548894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.232045  548894 request.go:632] Waited for 195.987112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232132  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-094095-m03
	I1008 18:00:28.232140  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.232148  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.232153  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.235683  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.431773  548894 request.go:632] Waited for 195.354282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431855  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-094095-m03
	I1008 18:00:28.431860  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.431867  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.431872  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.435214  548894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1008 18:00:28.435815  548894 pod_ready.go:93] pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace has status "Ready":"True"
	I1008 18:00:28.435951  548894 pod_ready.go:82] duration metric: took 399.956305ms for pod "kube-scheduler-ha-094095-m03" in "kube-system" namespace to be "Ready" ...
	I1008 18:00:28.435993  548894 pod_ready.go:39] duration metric: took 5.200153143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:00:28.436017  548894 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:00:28.436094  548894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:00:28.452375  548894 api_server.go:72] duration metric: took 22.923490341s to wait for apiserver process to appear ...
	I1008 18:00:28.452398  548894 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:00:28.452421  548894 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I1008 18:00:28.456918  548894 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I1008 18:00:28.456978  548894 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I1008 18:00:28.456986  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.456994  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.456999  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.457742  548894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1008 18:00:28.457798  548894 api_server.go:141] control plane version: v1.31.1
	I1008 18:00:28.457809  548894 api_server.go:131] duration metric: took 5.40508ms to wait for apiserver health ...
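[Editor's note, not part of the captured log] The two probes above are a plain GET to /healthz (expecting the literal body "ok") followed by GET /version, which is where the "control plane version: v1.31.1" line comes from. A small client-go sketch of the same pair of requests, with the kubeconfig path as a placeholder:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz - a healthy apiserver answers 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	fmt.Println(string(body), err)

	// GET /version - reports the control plane version (v1.31.1 in this run).
	if v, err := cs.Discovery().ServerVersion(); err == nil {
		fmt.Println(v.GitVersion)
	}
}
```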
	I1008 18:00:28.457822  548894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:00:28.632286  548894 request.go:632] Waited for 174.373411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632364  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:28.632372  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.632382  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.632388  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.638836  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:28.647332  548894 system_pods.go:59] 24 kube-system pods found
	I1008 18:00:28.647367  548894 system_pods.go:61] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:28.647374  548894 system_pods.go:61] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:28.647379  548894 system_pods.go:61] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:28.647384  548894 system_pods.go:61] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:28.647389  548894 system_pods.go:61] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:28.647394  548894 system_pods.go:61] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:28.647399  548894 system_pods.go:61] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:28.647404  548894 system_pods.go:61] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:28.647409  548894 system_pods.go:61] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:28.647417  548894 system_pods.go:61] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:28.647426  548894 system_pods.go:61] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:28.647432  548894 system_pods.go:61] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:28.647439  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:28.647445  548894 system_pods.go:61] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:28.647451  548894 system_pods.go:61] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:28.647456  548894 system_pods.go:61] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:28.647463  548894 system_pods.go:61] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:28.647468  548894 system_pods.go:61] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:28.647476  548894 system_pods.go:61] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:28.647482  548894 system_pods.go:61] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:28.647489  548894 system_pods.go:61] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:28.647494  548894 system_pods.go:61] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:28.647499  548894 system_pods.go:61] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:28.647505  548894 system_pods.go:61] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:28.647514  548894 system_pods.go:74] duration metric: took 189.683627ms to wait for pod list to return data ...
	I1008 18:00:28.647529  548894 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:00:28.831958  548894 request.go:632] Waited for 184.329764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832044  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I1008 18:00:28.832056  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:28.832067  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:28.832073  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:28.837077  548894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1008 18:00:28.837234  548894 default_sa.go:45] found service account: "default"
	I1008 18:00:28.837253  548894 default_sa.go:55] duration metric: took 189.716305ms for default service account to be created ...
	I1008 18:00:28.837265  548894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:00:29.031904  548894 request.go:632] Waited for 194.536031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031965  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I1008 18:00:29.031970  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.031979  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.031983  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.037622  548894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1008 18:00:29.044999  548894 system_pods.go:86] 24 kube-system pods found
	I1008 18:00:29.045026  548894 system_pods.go:89] "coredns-7c65d6cfc9-6c7xl" [5be15582-d4c7-4ec3-95db-7f9b7db4280d] Running
	I1008 18:00:29.045032  548894 system_pods.go:89] "coredns-7c65d6cfc9-ghz9x" [a8c97aaa-6d1a-4c2e-b8d8-74259235cd62] Running
	I1008 18:00:29.045036  548894 system_pods.go:89] "etcd-ha-094095" [cf087edc-eae8-4667-a3e4-6352aaa887e2] Running
	I1008 18:00:29.045039  548894 system_pods.go:89] "etcd-ha-094095-m02" [66292379-6f47-4ae3-981c-02ad00e18805] Running
	I1008 18:00:29.045043  548894 system_pods.go:89] "etcd-ha-094095-m03" [bb33d95c-74ea-48b9-b844-6a9fc3f04ed9] Running
	I1008 18:00:29.045046  548894 system_pods.go:89] "kindnet-8v7s4" [baa752ea-0ebd-49ba-8480-bc0814080699] Running
	I1008 18:00:29.045050  548894 system_pods.go:89] "kindnet-f5x42" [dafc58be-bac2-4ab3-a4b6-9d13556da2cd] Running
	I1008 18:00:29.045053  548894 system_pods.go:89] "kindnet-mclfx" [fca2ce96-9193-48a5-9dc7-9d20bde6787f] Running
	I1008 18:00:29.045056  548894 system_pods.go:89] "kube-apiserver-ha-094095" [2f281e4d-ed8a-45d6-a099-075cbb2aa560] Running
	I1008 18:00:29.045059  548894 system_pods.go:89] "kube-apiserver-ha-094095-m02" [2c01151f-d734-4af8-9730-f9877482749f] Running
	I1008 18:00:29.045063  548894 system_pods.go:89] "kube-apiserver-ha-094095-m03" [ba3548a5-29fd-4a27-a698-390da97bbef9] Running
	I1008 18:00:29.045066  548894 system_pods.go:89] "kube-controller-manager-ha-094095" [f2c904d5-5440-4915-9c16-a5f1069ba353] Running
	I1008 18:00:29.045070  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m02" [4204fc98-2989-4cc8-b02e-e16e776685e3] Running
	I1008 18:00:29.045076  548894 system_pods.go:89] "kube-controller-manager-ha-094095-m03" [87802d7e-ab95-41cb-95b1-fc84aeaf8a60] Running
	I1008 18:00:29.045082  548894 system_pods.go:89] "kube-proxy-gnmch" [2e4ec0ad-049b-48e6-90b2-8b8430d821f4] Running
	I1008 18:00:29.045086  548894 system_pods.go:89] "kube-proxy-krxss" [f17e19cf-b500-45ee-b363-7eabffb840f1] Running
	I1008 18:00:29.045089  548894 system_pods.go:89] "kube-proxy-r55hk" [6bdc6056-cd19-4f52-86f7-09572a950c01] Running
	I1008 18:00:29.045093  548894 system_pods.go:89] "kube-scheduler-ha-094095" [5915aa84-5555-4e5e-9551-778ad857b2e8] Running
	I1008 18:00:29.045098  548894 system_pods.go:89] "kube-scheduler-ha-094095-m02" [8a7c0068-f374-4f84-9441-c5e50bf6da0b] Running
	I1008 18:00:29.045104  548894 system_pods.go:89] "kube-scheduler-ha-094095-m03" [ff30473f-bd61-42ad-9f94-2af37583e05c] Running
	I1008 18:00:29.045107  548894 system_pods.go:89] "kube-vip-ha-094095" [010c3fcc-b5c7-470e-8027-5c67669abf94] Running
	I1008 18:00:29.045111  548894 system_pods.go:89] "kube-vip-ha-094095-m02" [af8b5e66-b132-4e4b-b0fd-3591a0b2384e] Running
	I1008 18:00:29.045114  548894 system_pods.go:89] "kube-vip-ha-094095-m03" [c7c6fef9-ede9-403c-b9ab-1913e0821173] Running
	I1008 18:00:29.045117  548894 system_pods.go:89] "storage-provisioner" [54520f81-08fe-4612-bef9-1fe0016c45ca] Running
	I1008 18:00:29.045124  548894 system_pods.go:126] duration metric: took 207.850736ms to wait for k8s-apps to be running ...
	I1008 18:00:29.045133  548894 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:00:29.045176  548894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:00:29.059678  548894 system_svc.go:56] duration metric: took 14.536958ms WaitForService to wait for kubelet
	I1008 18:00:29.059706  548894 kubeadm.go:582] duration metric: took 23.530822988s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:00:29.059724  548894 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:00:29.231880  548894 request.go:632] Waited for 172.048672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231961  548894 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I1008 18:00:29.231966  548894 round_trippers.go:469] Request Headers:
	I1008 18:00:29.231974  548894 round_trippers.go:473]     Accept: application/json, */*
	I1008 18:00:29.231981  548894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1008 18:00:29.238241  548894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1008 18:00:29.239300  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239332  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239347  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239353  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239361  548894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:00:29.239366  548894 node_conditions.go:123] node cpu capacity is 2
	I1008 18:00:29.239371  548894 node_conditions.go:105] duration metric: took 179.642781ms to run NodePressure ...
	I1008 18:00:29.239392  548894 start.go:241] waiting for startup goroutines ...
	I1008 18:00:29.239417  548894 start.go:255] writing updated cluster config ...
	I1008 18:00:29.239708  548894 ssh_runner.go:195] Run: rm -f paused
	I1008 18:00:29.291443  548894 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:00:29.293244  548894 out.go:177] * Done! kubectl is now configured to use "ha-094095" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.930683981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410666930618491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=906ab1da-f627-4ebe-8bba-d69d14ff25db name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.931545985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c27df7bc-e40f-4852-936d-fe9589741856 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.931617344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c27df7bc-e40f-4852-936d-fe9589741856 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.931848278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c27df7bc-e40f-4852-936d-fe9589741856 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.968112607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec602b68-cb79-4cb2-a11b-7e4f1391ee45 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.968204540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec602b68-cb79-4cb2-a11b-7e4f1391ee45 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.969298795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=811198a3-ff7f-4176-b545-1815801ba679 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.969973049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410666969949839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=811198a3-ff7f-4176-b545-1815801ba679 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.970494846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf2e5cd4-dbd5-4b48-a29d-a40155b3fc4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.970563296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf2e5cd4-dbd5-4b48-a29d-a40155b3fc4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:26 ha-094095 crio[659]: time="2024-10-08 18:04:26.970969213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf2e5cd4-dbd5-4b48-a29d-a40155b3fc4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.007227509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbd8c6c0-ff5d-4b8d-88e3-8a3798459e75 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.007298912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbd8c6c0-ff5d-4b8d-88e3-8a3798459e75 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.008641928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5cba28d-5e0a-428a-b7a7-cd4c085455d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.009045704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410667009025381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5cba28d-5e0a-428a-b7a7-cd4c085455d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.009587823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb349d2b-8a8f-4315-b1b0-aa265b0d2e63 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.009664351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb349d2b-8a8f-4315-b1b0-aa265b0d2e63 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.009904547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb349d2b-8a8f-4315-b1b0-aa265b0d2e63 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.045649880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4e4f38c-2beb-4c43-86cb-528b7f54cb75 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.045791380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4e4f38c-2beb-4c43-86cb-528b7f54cb75 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.046900496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=febdc333-5f2d-4822-88ff-187b2ce72536 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.047302288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410667047281585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=febdc333-5f2d-4822-88ff-187b2ce72536 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.047826339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=debd0482-337a-4b5f-969c-942a05c1c456 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.047897609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=debd0482-337a-4b5f-969c-942a05c1c456 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:04:27 ha-094095 crio[659]: time="2024-10-08 18:04:27.048122811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4f194cdf306afbd0bf623b97a85e93c1725db4b5a6ba6b933a57f245004a603,PodSandboxId:eaf6acce4786e8a6382ec9ad0c306034e30514469c0fedb9853a0ccca6741b52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728410434080989304,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-n779r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3a10d4a-6add-4642-961b-b7b00f9e363b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee,PodSandboxId:875cfacbeeb23201c6c36497a685664419b120403e29f2f38b42fd109b628897,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297648632506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6c7xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5be15582-d4c7-4ec3-95db-7f9b7db4280d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02,PodSandboxId:9d8f70dc17585b7eb344aa16fc0e16d59a949abccc21d74743406cfebe39a34b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728410297599818148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghz9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a8c97aaa-6d1a-4c2e-b8d8-74259235cd62,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3,PodSandboxId:d884b794bcbf861c4f46634be5faf2397ea156fe7562c77953821df012c368fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728410297545274983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54520f81-08fe-4612-bef9-1fe0016c45ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a,PodSandboxId:c791fa497b85ac435aad59433589c92f9a02514e94dae3da5525477f069e324e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17284102
85248917071,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mclfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca2ce96-9193-48a5-9dc7-9d20bde6787f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034,PodSandboxId:29ed3e17d1aabd1f6f3e5cdf4c735f69be9765b8c5767b7b01b9c75638d9d803,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728410285075550917,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnmch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e4ec0ad-049b-48e6-90b2-8b8430d821f4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d,PodSandboxId:13853a6e388f1bf625424e6a43d282b29d0edcf55a43e4a550f95be689865542,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728410275444847191,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19b7e8dee4daa510f3f23034617cd71c,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7,PodSandboxId:b68b365f16def61fdb163d7bfb03d56e2eeda34390592db3f6d1fca104a9f14d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728410273810961564,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ef4792d58f06f8319e0939993449f9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b,PodSandboxId:1c13c52688447e1457bf277b4fe1fb87894c0f251d005e834c8dabd27713df92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728410273795701718,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
ha-094095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab63a85f4abc9ded81a3460d92ef212,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20,PodSandboxId:f021979b9e57f9b85a8710325321a6bb7ca7a2d5c9adb9f874b7b6b5d67221d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728410273785506018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-094095,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 2762c7155c0d46d981fd81220017a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb,PodSandboxId:a2f40f00bb5ffbfbcbac697446e4bab8e8fed7f163b3806dc0c1381535c744b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728410273717756178,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-094095,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f977c77bded84c5cd8640a7d7c6034,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=debd0482-337a-4b5f-969c-942a05c1c456 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4f194cdf306a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   eaf6acce4786e       busybox-7dff88458-n779r
	079e7a8fee78f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   875cfacbeeb23       coredns-7c65d6cfc9-6c7xl
	1eb4935d542c2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   9d8f70dc17585       coredns-7c65d6cfc9-ghz9x
	dfdfc8735b822       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d884b794bcbf8       storage-provisioner
	17a4523dfe3c8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   c791fa497b85a       kindnet-mclfx
	347854044c294       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   29ed3e17d1aab       kube-proxy-gnmch
	8f117035b9a9a       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   13853a6e388f1       kube-vip-ha-094095
	9c418725a44b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b68b365f16def       etcd-ha-094095
	3b8241e00230e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c13c52688447       kube-apiserver-ha-094095
	0224d96e8ab1a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f021979b9e57f       kube-scheduler-ha-094095
	ec97e876ef66b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   a2f40f00bb5ff       kube-controller-manager-ha-094095
	
	
	==> coredns [079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee] <==
	[INFO] 10.244.1.2:46939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173909s
	[INFO] 10.244.1.2:43197 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152065s
	[INFO] 10.244.0.4:54276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001776636s
	[INFO] 10.244.0.4:42844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001027134s
	[INFO] 10.244.0.4:33552 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087486s
	[INFO] 10.244.0.4:40894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128456s
	[INFO] 10.244.2.2:37156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090694s
	[INFO] 10.244.2.2:35975 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000342501s
	[INFO] 10.244.2.2:56819 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008022s
	[INFO] 10.244.2.2:40613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107574s
	[INFO] 10.244.1.2:38959 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208641s
	[INFO] 10.244.0.4:58386 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011149s
	[INFO] 10.244.0.4:56827 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016311s
	[INFO] 10.244.0.4:52547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068216s
	[INFO] 10.244.0.4:59149 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077593s
	[INFO] 10.244.2.2:49444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156535s
	[INFO] 10.244.2.2:51787 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111699s
	[INFO] 10.244.2.2:52768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107964s
	[INFO] 10.244.2.2:53538 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071551s
	[INFO] 10.244.1.2:52231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220976s
	[INFO] 10.244.0.4:45893 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145642s
	[INFO] 10.244.0.4:50564 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012308s
	[INFO] 10.244.0.4:40912 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110407s
	[INFO] 10.244.2.2:48559 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182361s
	[INFO] 10.244.2.2:42189 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123843s
	
	
	==> coredns [1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02] <==
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000403051s
	[INFO] 10.244.2.2:33432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198542s
	[INFO] 10.244.2.2:43175 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00011602s
	[INFO] 10.244.2.2:39986 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00007233s
	[INFO] 10.244.2.2:43098 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001798194s
	[INFO] 10.244.1.2:51904 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006238586s
	[INFO] 10.244.1.2:39841 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000245332s
	[INFO] 10.244.1.2:52002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010411466s
	[INFO] 10.244.0.4:36134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131817s
	[INFO] 10.244.0.4:60392 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136485s
	[INFO] 10.244.0.4:47750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001276s
	[INFO] 10.244.0.4:53066 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112589s
	[INFO] 10.244.2.2:50951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171312s
	[INFO] 10.244.2.2:36151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001719697s
	[INFO] 10.244.2.2:59876 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00134295s
	[INFO] 10.244.2.2:34156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121408s
	[INFO] 10.244.1.2:40835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210172s
	[INFO] 10.244.1.2:35561 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210453s
	[INFO] 10.244.1.2:58285 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:57787 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000236305s
	[INFO] 10.244.1.2:52947 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185701s
	[INFO] 10.244.1.2:38121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000200581s
	[INFO] 10.244.0.4:37934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195898s
	[INFO] 10.244.2.2:51605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210836s
	[INFO] 10.244.2.2:44666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117181s
	
	
	==> describe nodes <==
	Name:               ha-094095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:57:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 17:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-094095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f253fb8c294514826ad247cbfc784d
	  System UUID:                14f253fb-8c29-4514-826a-d247cbfc784d
	  Boot ID:                    6cdd0146-42c4-4814-93e6-3af5699e77ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-n779r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 coredns-7c65d6cfc9-6c7xl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-ghz9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-094095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-mclfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-094095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-094095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-gnmch                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-094095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-094095                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m21s  kube-proxy       
	  Normal  Starting                 6m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-094095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-094095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-094095 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-094095 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	  Normal  RegisteredNode           4m18s  node-controller  Node ha-094095 event: Registered Node ha-094095 in Controller
	
	
	Name:               ha-094095-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T17_58_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 17:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:01:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 08 Oct 2024 18:00:56 +0000   Tue, 08 Oct 2024 18:02:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    ha-094095-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6846904a528149b4bec4ab05607145f5
	  System UUID:                6846904a-5281-49b4-bec4-ab05607145f5
	  Boot ID:                    92a2dec0-2bc9-44db-94e9-e4a68690b144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxdk6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-094095-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-f5x42                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-094095-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-094095-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-r55hk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-094095-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-vip-ha-094095-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m35s (x8 over 5m35s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s (x8 over 5m35s)  kubelet          Node ha-094095-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s (x7 over 5m35s)  kubelet          Node ha-094095-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-094095-m02 event: Registered Node ha-094095-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-094095-m02 status is now: NodeNotReady
	
	
	Name:               ha-094095-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_00_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:03 +0000   Tue, 08 Oct 2024 18:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    ha-094095-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cca5410c10d94705a0a750a2a36dfcf7
	  System UUID:                cca5410c-10d9-4705-a0a7-50a2a36dfcf7
	  Boot ID:                    a52600ea-f5af-4184-95ce-18bc5a4ff10e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rxwcg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-094095-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-8v7s4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-apiserver-ha-094095-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-094095-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-krxss                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-ha-094095-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-vip-ha-094095-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-094095-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-094095-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-094095-m03 event: Registered Node ha-094095-m03 in Controller
	
	
	Name:               ha-094095-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-094095-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=ha-094095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_08T18_01_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:01:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-094095-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:04:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:01:40 +0000   Tue, 08 Oct 2024 18:01:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ha-094095-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6fe409be99242ac858632e59843d080
	  System UUID:                c6fe409b-e992-42ac-8586-32e59843d080
	  Boot ID:                    10df0150-6a8d-4d3e-8551-af1fe0638414
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jhqlp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m18s
	  kube-system                 kube-proxy-jjgsh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m12s                  kube-proxy       
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m18s (x2 over 3m18s)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x2 over 3m18s)  kubelet          Node ha-094095-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x2 over 3m18s)  kubelet          Node ha-094095-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-094095-m04 event: Registered Node ha-094095-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-094095-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 17:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050015] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039380] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.822235] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417178] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.589695] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.867596] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.064259] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063997] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.185531] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.116355] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.250177] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.801506] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.578485] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.057293] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.117363] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.084526] kauditd_printk_skb: 79 callbacks suppressed
	[Oct 8 17:58] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.243247] kauditd_printk_skb: 28 callbacks suppressed
	[ +42.891327] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7] <==
	{"level":"warn","ts":"2024-10-08T18:04:27.019146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.112027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.114577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.119651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.218911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.278704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.287363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.290886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.298464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.304111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.310713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.313847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.316907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.319418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.324579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.330174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.335945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.340362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.343773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.350114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.356067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.361585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.365313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.367958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-08T18:04:27.371834Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"a7c10c98480f83f3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:04:27 up 7 min,  0 users,  load average: 0.36, 0.38, 0.19
	Linux ha-094095 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a] <==
	I1008 18:03:56.531040       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:06.521023       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:06.521156       1 main.go:299] handling current node
	I1008 18:04:06.521246       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:06.521314       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:06.521746       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:06.521831       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:06.522370       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:06.522563       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:16.529710       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:16.529904       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:16.530111       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:16.530143       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:16.530205       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:16.530224       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	I1008 18:04:16.530303       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:16.530322       1 main.go:299] handling current node
	I1008 18:04:26.529904       1 main.go:295] Handling node with IPs: map[192.168.39.99:{}]
	I1008 18:04:26.529982       1 main.go:299] handling current node
	I1008 18:04:26.530003       1 main.go:295] Handling node with IPs: map[192.168.39.65:{}]
	I1008 18:04:26.530016       1 main.go:322] Node ha-094095-m02 has CIDR [10.244.1.0/24] 
	I1008 18:04:26.530169       1 main.go:295] Handling node with IPs: map[192.168.39.194:{}]
	I1008 18:04:26.530222       1 main.go:322] Node ha-094095-m03 has CIDR [10.244.2.0/24] 
	I1008 18:04:26.530474       1 main.go:295] Handling node with IPs: map[192.168.39.33:{}]
	I1008 18:04:26.530493       1 main.go:322] Node ha-094095-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b] <==
	I1008 17:57:58.485779       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 17:57:58.491495       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.99]
	I1008 17:57:58.492135       1 controller.go:615] quota admission added evaluator for: endpoints
	I1008 17:57:58.499200       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 17:57:58.903637       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 17:58:00.054350       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 17:58:00.074068       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 17:58:00.230930       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 17:58:03.854509       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1008 17:58:03.954697       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1008 18:00:38.037771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45714: use of closed network connection
	E1008 18:00:38.232043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45744: use of closed network connection
	E1008 18:00:38.418256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45748: use of closed network connection
	E1008 18:00:38.622516       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45768: use of closed network connection
	E1008 18:00:38.796785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45788: use of closed network connection
	E1008 18:00:38.988513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45812: use of closed network connection
	E1008 18:00:39.174560       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45828: use of closed network connection
	E1008 18:00:39.350317       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45850: use of closed network connection
	E1008 18:00:39.525813       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45854: use of closed network connection
	E1008 18:00:39.828048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49850: use of closed network connection
	E1008 18:00:40.000068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49874: use of closed network connection
	E1008 18:00:40.192753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49888: use of closed network connection
	E1008 18:00:40.379456       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49904: use of closed network connection
	E1008 18:00:40.562970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49918: use of closed network connection
	E1008 18:00:40.742948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49938: use of closed network connection
	
	
	==> kube-controller-manager [ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb] <==
	I1008 18:01:09.767306       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-094095-m04" podCIDRs=["10.244.3.0/24"]
	I1008 18:01:09.767482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:09.767767       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.015142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.174634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:10.537159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.265250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:12.321671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:13.716760       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-094095-m04"
	I1008 18:01:13.777151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:20.033294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108541       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.108639       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:01:28.124876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:28.732886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:01:40.603842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m04"
	I1008 18:02:28.755242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.757889       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-094095-m04"
	I1008 18:02:28.778675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:28.891800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.567817ms"
	I1008 18:02:28.891887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.019µs"
	I1008 18:02:30.013028       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	I1008 18:02:33.959772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-094095-m02"
	
	
	==> kube-proxy [347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 17:58:05.534485       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 17:58:05.568766       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	E1008 17:58:05.568940       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 17:58:05.609153       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 17:58:05.609181       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 17:58:05.609201       1 server_linux.go:169] "Using iptables Proxier"
	I1008 17:58:05.612762       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 17:58:05.613968       1 server.go:483] "Version info" version="v1.31.1"
	I1008 17:58:05.614042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 17:58:05.616792       1 config.go:199] "Starting service config controller"
	I1008 17:58:05.617139       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 17:58:05.617374       1 config.go:105] "Starting endpoint slice config controller"
	I1008 17:58:05.617451       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 17:58:05.618851       1 config.go:328] "Starting node config controller"
	I1008 17:58:05.619090       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 17:58:05.718484       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 17:58:05.718497       1 shared_informer.go:320] Caches are synced for service config
	I1008 17:58:05.720100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20] <==
	E1008 18:00:30.199446       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rzflt" node="ha-094095-m03"
	E1008 18:00:30.199562       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e0ead4a-bdd7-4fe2-8070-a2e4680f7988(default/busybox-7dff88458-rzflt) was assumed on ha-094095-m03 but assigned to ha-094095-m02" pod="default/busybox-7dff88458-rzflt"
	E1008 18:00:30.201601       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rzflt\": pod busybox-7dff88458-rzflt is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-rzflt"
	I1008 18:00:30.201672       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rzflt" node="ha-094095-m02"
	E1008 18:00:30.241278       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.243855       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 00074fc5-40f9-403b-9cec-3f333b177d47(default/busybox-7dff88458-2hz9n) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2hz9n"
	E1008 18:00:30.248134       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2hz9n\": pod busybox-7dff88458-2hz9n is already assigned to node \"ha-094095-m02\"" pod="default/busybox-7dff88458-2hz9n"
	I1008 18:00:30.248955       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2hz9n" node="ha-094095-m02"
	E1008 18:00:30.302814       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.303201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 399813b8-6199-4631-af76-66e7e8bf4b8c(default/busybox-7dff88458-rxwcg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rxwcg"
	E1008 18:00:30.303327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rxwcg\": pod busybox-7dff88458-rxwcg is already assigned to node \"ha-094095-m03\"" pod="default/busybox-7dff88458-rxwcg"
	I1008 18:00:30.303461       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rxwcg" node="ha-094095-m03"
	E1008 18:00:30.454050       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-l6wvv\" not found" pod="default/busybox-7dff88458-l6wvv"
	E1008 18:01:09.806729       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.806888       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c9b872af-5075-4c26-99cf-282b077912ee(kube-system/kube-proxy-jjgsh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jjgsh"
	E1008 18:01:09.806916       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jjgsh\": pod kube-proxy-jjgsh is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-jjgsh"
	I1008 18:01:09.806962       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jjgsh" node="ha-094095-m04"
	E1008 18:01:09.807512       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.807581       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2f9978f0-fb58-41fb-ac79-c07ec22f8b12(kube-system/kindnet-jhqlp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jhqlp"
	E1008 18:01:09.807603       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jhqlp\": pod kindnet-jhqlp is already assigned to node \"ha-094095-m04\"" pod="kube-system/kindnet-jhqlp"
	I1008 18:01:09.807627       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jhqlp" node="ha-094095-m04"
	E1008 18:01:09.868191       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	E1008 18:01:09.869875       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6257090e-676b-45ea-9261-104b1ba829f3(kube-system/kube-proxy-x5wf6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-x5wf6"
	E1008 18:01:09.871281       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-x5wf6\": pod kube-proxy-x5wf6 is already assigned to node \"ha-094095-m04\"" pod="kube-system/kube-proxy-x5wf6"
	I1008 18:01:09.871556       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-x5wf6" node="ha-094095-m04"
	
	
	==> kubelet <==
	Oct 08 18:03:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:03:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293753    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:00 ha-094095 kubelet[1309]: E1008 18:03:00.293782    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410580293521661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295059    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:10 ha-094095 kubelet[1309]: E1008 18:03:10.295735    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410590294685199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297939    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:20 ha-094095 kubelet[1309]: E1008 18:03:20.297984    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410600297585069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300086    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:30 ha-094095 kubelet[1309]: E1008 18:03:30.300349    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410610299745153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302156    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:40 ha-094095 kubelet[1309]: E1008 18:03:40.302530    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410620301770572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304820    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:03:50 ha-094095 kubelet[1309]: E1008 18:03:50.304911    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410630304349593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.254307    1309 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 18:04:00 ha-094095 kubelet[1309]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 18:04:00 ha-094095 kubelet[1309]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307018    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:00 ha-094095 kubelet[1309]: E1008 18:04:00.307069    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410640306622343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:10 ha-094095 kubelet[1309]: E1008 18:04:10.309307    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410650308966284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:10 ha-094095 kubelet[1309]: E1008 18:04:10.309339    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410650308966284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:20 ha-094095 kubelet[1309]: E1008 18:04:20.311278    1309 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660310643006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:04:20 ha-094095 kubelet[1309]: E1008 18:04:20.311350    1309 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728410660310643006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (797.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-094095 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-094095 -v=7 --alsologtostderr
E1008 18:05:51.764845  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-094095 -v=7 --alsologtostderr: exit status 82 (2m1.864787692s)

                                                
                                                
-- stdout --
	* Stopping node "ha-094095-m04"  ...
	* Stopping node "ha-094095-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:04:28.438815  554096 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:04:28.439064  554096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:04:28.439075  554096 out.go:358] Setting ErrFile to fd 2...
	I1008 18:04:28.439079  554096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:04:28.439241  554096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:04:28.439450  554096 out.go:352] Setting JSON to false
	I1008 18:04:28.439542  554096 mustload.go:65] Loading cluster: ha-094095
	I1008 18:04:28.439934  554096 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:04:28.440013  554096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:04:28.440190  554096 mustload.go:65] Loading cluster: ha-094095
	I1008 18:04:28.440313  554096 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:04:28.440357  554096 stop.go:39] StopHost: ha-094095-m04
	I1008 18:04:28.440722  554096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:04:28.440770  554096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:04:28.456965  554096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38627
	I1008 18:04:28.457497  554096 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:04:28.458091  554096 main.go:141] libmachine: Using API Version  1
	I1008 18:04:28.458116  554096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:04:28.458495  554096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:04:28.460714  554096 out.go:177] * Stopping node "ha-094095-m04"  ...
	I1008 18:04:28.461878  554096 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 18:04:28.461913  554096 main.go:141] libmachine: (ha-094095-m04) Calling .DriverName
	I1008 18:04:28.462113  554096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 18:04:28.462132  554096 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHHostname
	I1008 18:04:28.464769  554096 main.go:141] libmachine: (ha-094095-m04) DBG | domain ha-094095-m04 has defined MAC address 52:54:00:f3:7e:d4 in network mk-ha-094095
	I1008 18:04:28.465177  554096 main.go:141] libmachine: (ha-094095-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:7e:d4", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 19:00:56 +0000 UTC Type:0 Mac:52:54:00:f3:7e:d4 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-094095-m04 Clientid:01:52:54:00:f3:7e:d4}
	I1008 18:04:28.465197  554096 main.go:141] libmachine: (ha-094095-m04) DBG | domain ha-094095-m04 has defined IP address 192.168.39.33 and MAC address 52:54:00:f3:7e:d4 in network mk-ha-094095
	I1008 18:04:28.465350  554096 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHPort
	I1008 18:04:28.465529  554096 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHKeyPath
	I1008 18:04:28.465700  554096 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHUsername
	I1008 18:04:28.465860  554096 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m04/id_rsa Username:docker}
	I1008 18:04:28.555302  554096 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 18:04:28.608949  554096 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 18:04:28.661806  554096 main.go:141] libmachine: Stopping "ha-094095-m04"...
	I1008 18:04:28.661857  554096 main.go:141] libmachine: (ha-094095-m04) Calling .GetState
	I1008 18:04:28.663556  554096 main.go:141] libmachine: (ha-094095-m04) Calling .Stop
	I1008 18:04:28.667189  554096 main.go:141] libmachine: (ha-094095-m04) Waiting for machine to stop 0/120
	I1008 18:04:29.841895  554096 main.go:141] libmachine: (ha-094095-m04) Calling .GetState
	I1008 18:04:29.843280  554096 main.go:141] libmachine: Machine "ha-094095-m04" was stopped.
	I1008 18:04:29.843298  554096 stop.go:75] duration metric: took 1.381421387s to stop
	I1008 18:04:29.843345  554096 stop.go:39] StopHost: ha-094095-m03
	I1008 18:04:29.843687  554096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:04:29.843738  554096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:04:29.859067  554096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45013
	I1008 18:04:29.859450  554096 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:04:29.859924  554096 main.go:141] libmachine: Using API Version  1
	I1008 18:04:29.859949  554096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:04:29.860251  554096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:04:29.862874  554096 out.go:177] * Stopping node "ha-094095-m03"  ...
	I1008 18:04:29.863924  554096 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 18:04:29.863953  554096 main.go:141] libmachine: (ha-094095-m03) Calling .DriverName
	I1008 18:04:29.864154  554096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 18:04:29.864177  554096 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHHostname
	I1008 18:04:29.866992  554096 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 18:04:29.867415  554096 main.go:141] libmachine: (ha-094095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:8f:e3", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:59:29 +0000 UTC Type:0 Mac:52:54:00:e6:8f:e3 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-094095-m03 Clientid:01:52:54:00:e6:8f:e3}
	I1008 18:04:29.867472  554096 main.go:141] libmachine: (ha-094095-m03) DBG | domain ha-094095-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:e6:8f:e3 in network mk-ha-094095
	I1008 18:04:29.867567  554096 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHPort
	I1008 18:04:29.867745  554096 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHKeyPath
	I1008 18:04:29.867904  554096 main.go:141] libmachine: (ha-094095-m03) Calling .GetSSHUsername
	I1008 18:04:29.868000  554096 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m03/id_rsa Username:docker}
	I1008 18:04:29.955908  554096 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 18:04:30.009414  554096 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 18:04:30.062980  554096 main.go:141] libmachine: Stopping "ha-094095-m03"...
	I1008 18:04:30.063006  554096 main.go:141] libmachine: (ha-094095-m03) Calling .GetState
	I1008 18:04:30.064386  554096 main.go:141] libmachine: (ha-094095-m03) Calling .Stop
	I1008 18:04:30.067761  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 0/120
	I1008 18:04:31.069121  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 1/120
	I1008 18:04:32.070428  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 2/120
	I1008 18:04:33.071731  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 3/120
	I1008 18:04:34.073103  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 4/120
	I1008 18:04:35.075158  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 5/120
	I1008 18:04:36.077033  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 6/120
	I1008 18:04:37.078441  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 7/120
	I1008 18:04:38.079948  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 8/120
	I1008 18:04:39.081391  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 9/120
	I1008 18:04:40.083321  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 10/120
	I1008 18:04:41.084750  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 11/120
	I1008 18:04:42.086379  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 12/120
	I1008 18:04:43.087797  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 13/120
	I1008 18:04:44.089641  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 14/120
	I1008 18:04:45.091385  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 15/120
	I1008 18:04:46.093057  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 16/120
	I1008 18:04:47.094457  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 17/120
	I1008 18:04:48.095835  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 18/120
	I1008 18:04:49.097115  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 19/120
	I1008 18:04:50.098563  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 20/120
	I1008 18:04:51.100381  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 21/120
	I1008 18:04:52.102924  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 22/120
	I1008 18:04:53.104777  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 23/120
	I1008 18:04:54.106462  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 24/120
	I1008 18:04:55.108667  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 25/120
	I1008 18:04:56.110205  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 26/120
	I1008 18:04:57.111720  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 27/120
	I1008 18:04:58.113184  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 28/120
	I1008 18:04:59.114569  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 29/120
	I1008 18:05:00.116082  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 30/120
	I1008 18:05:01.117281  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 31/120
	I1008 18:05:02.118851  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 32/120
	I1008 18:05:03.120043  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 33/120
	I1008 18:05:04.121592  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 34/120
	I1008 18:05:05.123529  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 35/120
	I1008 18:05:06.124912  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 36/120
	I1008 18:05:07.126351  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 37/120
	I1008 18:05:08.127615  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 38/120
	I1008 18:05:09.128935  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 39/120
	I1008 18:05:10.130810  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 40/120
	I1008 18:05:11.132288  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 41/120
	I1008 18:05:12.133637  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 42/120
	I1008 18:05:13.134748  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 43/120
	I1008 18:05:14.136789  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 44/120
	I1008 18:05:15.138427  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 45/120
	I1008 18:05:16.139740  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 46/120
	I1008 18:05:17.140922  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 47/120
	I1008 18:05:18.142399  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 48/120
	I1008 18:05:19.144166  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 49/120
	I1008 18:05:20.146230  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 50/120
	I1008 18:05:21.147997  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 51/120
	I1008 18:05:22.149157  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 52/120
	I1008 18:05:23.150524  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 53/120
	I1008 18:05:24.151949  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 54/120
	I1008 18:05:25.153414  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 55/120
	I1008 18:05:26.154821  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 56/120
	I1008 18:05:27.156095  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 57/120
	I1008 18:05:28.157501  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 58/120
	I1008 18:05:29.158805  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 59/120
	I1008 18:05:30.160573  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 60/120
	I1008 18:05:31.161975  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 61/120
	I1008 18:05:32.163258  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 62/120
	I1008 18:05:33.164943  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 63/120
	I1008 18:05:34.166331  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 64/120
	I1008 18:05:35.167998  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 65/120
	I1008 18:05:36.169240  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 66/120
	I1008 18:05:37.170521  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 67/120
	I1008 18:05:38.172778  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 68/120
	I1008 18:05:39.173965  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 69/120
	I1008 18:05:40.175682  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 70/120
	I1008 18:05:41.177058  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 71/120
	I1008 18:05:42.178303  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 72/120
	I1008 18:05:43.179632  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 73/120
	I1008 18:05:44.180829  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 74/120
	I1008 18:05:45.182464  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 75/120
	I1008 18:05:46.183718  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 76/120
	I1008 18:05:47.185059  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 77/120
	I1008 18:05:48.186696  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 78/120
	I1008 18:05:49.187875  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 79/120
	I1008 18:05:50.189475  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 80/120
	I1008 18:05:51.190816  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 81/120
	I1008 18:05:52.192076  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 82/120
	I1008 18:05:53.193447  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 83/120
	I1008 18:05:54.194798  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 84/120
	I1008 18:05:55.196663  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 85/120
	I1008 18:05:56.197952  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 86/120
	I1008 18:05:57.199449  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 87/120
	I1008 18:05:58.200813  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 88/120
	I1008 18:05:59.202355  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 89/120
	I1008 18:06:00.203698  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 90/120
	I1008 18:06:01.204996  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 91/120
	I1008 18:06:02.206293  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 92/120
	I1008 18:06:03.207573  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 93/120
	I1008 18:06:04.208920  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 94/120
	I1008 18:06:05.210618  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 95/120
	I1008 18:06:06.212006  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 96/120
	I1008 18:06:07.213635  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 97/120
	I1008 18:06:08.215095  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 98/120
	I1008 18:06:09.216591  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 99/120
	I1008 18:06:10.218236  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 100/120
	I1008 18:06:11.219762  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 101/120
	I1008 18:06:12.221244  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 102/120
	I1008 18:06:13.222523  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 103/120
	I1008 18:06:14.223963  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 104/120
	I1008 18:06:15.225756  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 105/120
	I1008 18:06:16.227097  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 106/120
	I1008 18:06:17.228821  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 107/120
	I1008 18:06:18.230509  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 108/120
	I1008 18:06:19.231918  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 109/120
	I1008 18:06:20.233837  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 110/120
	I1008 18:06:21.235184  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 111/120
	I1008 18:06:22.236529  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 112/120
	I1008 18:06:23.237809  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 113/120
	I1008 18:06:24.239094  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 114/120
	I1008 18:06:25.241299  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 115/120
	I1008 18:06:26.243445  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 116/120
	I1008 18:06:27.244915  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 117/120
	I1008 18:06:28.246182  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 118/120
	I1008 18:06:29.247689  554096 main.go:141] libmachine: (ha-094095-m03) Waiting for machine to stop 119/120
	I1008 18:06:30.248208  554096 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1008 18:06:30.248278  554096 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1008 18:06:30.250227  554096 out.go:201] 
	W1008 18:06:30.251298  554096 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1008 18:06:30.251318  554096 out.go:270] * 
	* 
	W1008 18:06:30.254655  554096 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 18:06:30.255973  554096 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-094095 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-094095 --wait=true -v=7 --alsologtostderr
E1008 18:06:38.897315  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:07:06.597084  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:10:51.764485  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:11:38.895432  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:12:14.828903  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:15:51.764508  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:16:38.895422  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-094095 --wait=true -v=7 --alsologtostderr: exit status 80 (11m10.108226017s)

                                                
                                                
-- stdout --
	* [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	* Updating the running kvm2 "ha-094095" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-094095-m02" control-plane node in "ha-094095" cluster
	* Restarting existing kvm2 VM for "ha-094095-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.99
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.99
	* Verifying Kubernetes components...
	
	* Starting "ha-094095-m03" control-plane node in "ha-094095" cluster
	* Restarting existing kvm2 VM for "ha-094095-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.99,192.168.39.65
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.99
	  - env NO_PROXY=192.168.39.99,192.168.39.65
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:06:30.309137  554606 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:06:30.309278  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309286  554606 out.go:358] Setting ErrFile to fd 2...
	I1008 18:06:30.309292  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309514  554606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:06:30.310048  554606 out.go:352] Setting JSON to false
	I1008 18:06:30.311177  554606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6542,"bootTime":1728404248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:06:30.311239  554606 start.go:139] virtualization: kvm guest
	I1008 18:06:30.314064  554606 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:06:30.315343  554606 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:06:30.315380  554606 notify.go:220] Checking for updates...
	I1008 18:06:30.317931  554606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:06:30.319349  554606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:06:30.320487  554606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:06:30.321485  554606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:06:30.322477  554606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:06:30.323977  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:30.324106  554606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:06:30.324624  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.324671  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.339874  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1008 18:06:30.340381  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.341072  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.341127  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.341483  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.341654  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.375512  554606 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:06:30.376454  554606 start.go:297] selected driver: kvm2
	I1008 18:06:30.376466  554606 start.go:901] validating driver "kvm2" against &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.376624  554606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:06:30.376959  554606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.377044  554606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:06:30.391484  554606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:06:30.392523  554606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:06:30.392590  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:06:30.392666  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:06:30.392787  554606 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.393008  554606 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.394646  554606 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 18:06:30.395834  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:06:30.395871  554606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:06:30.395884  554606 cache.go:56] Caching tarball of preloaded images
	I1008 18:06:30.395977  554606 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:06:30.395992  554606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:06:30.396098  554606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:06:30.396294  554606 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:06:30.396336  554606 start.go:364] duration metric: took 25.244µs to acquireMachinesLock for "ha-094095"
	I1008 18:06:30.396355  554606 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:06:30.396364  554606 fix.go:54] fixHost starting: 
	I1008 18:06:30.396631  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.396667  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.410133  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I1008 18:06:30.410601  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.411054  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.411079  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.411411  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.411582  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.411739  554606 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 18:06:30.413026  554606 fix.go:112] recreateIfNeeded on ha-094095: state=Running err=<nil>
	W1008 18:06:30.413058  554606 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:06:30.414579  554606 out.go:177] * Updating the running kvm2 "ha-094095" VM ...
	I1008 18:06:30.415651  554606 machine.go:93] provisionDockerMachine start ...
	I1008 18:06:30.415671  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.415848  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.418450  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.418937  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.418961  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.419103  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.419284  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419446  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419606  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.419778  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.420056  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.420074  554606 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:06:30.527850  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.527883  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528141  554606 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 18:06:30.528169  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528335  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.530991  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531397  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.531419  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531520  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.531702  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.531851  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.532037  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.532201  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.532384  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.532397  554606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 18:06:30.657746  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.657776  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.660255  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660584  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.660613  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660854  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.661042  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661234  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661339  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.661486  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.661678  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.661694  554606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:06:30.771861  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:06:30.771897  554606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:06:30.771927  554606 buildroot.go:174] setting up certificates
	I1008 18:06:30.771935  554606 provision.go:84] configureAuth start
	I1008 18:06:30.771945  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.772190  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:06:30.774789  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775138  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.775159  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775238  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.777464  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777796  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.777820  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777940  554606 provision.go:143] copyHostCerts
	I1008 18:06:30.777975  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778033  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:06:30.778044  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778108  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:06:30.778196  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778213  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:06:30.778219  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778243  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:06:30.778299  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778314  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:06:30.778342  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778371  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:06:30.778444  554606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 18:06:30.957867  554606 provision.go:177] copyRemoteCerts
	I1008 18:06:30.957933  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:06:30.957968  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.960618  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.960989  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.961015  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.961231  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.961399  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.961567  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.961712  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:06:31.044849  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:06:31.044943  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:06:31.071107  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:06:31.071180  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 18:06:31.095762  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:06:31.095846  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:06:31.121110  554606 provision.go:87] duration metric: took 349.162437ms to configureAuth
	I1008 18:06:31.121135  554606 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:06:31.121372  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:31.121456  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:31.124338  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124715  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:31.124743  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124960  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:31.125168  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125328  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125469  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:31.125643  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:31.125857  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:31.125872  554606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:08:01.946716  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:08:01.946753  554606 machine.go:96] duration metric: took 1m31.531085514s to provisionDockerMachine
	I1008 18:08:01.946788  554606 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 18:08:01.946804  554606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:08:01.946874  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:01.947275  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:08:01.947304  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:01.950626  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951103  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:01.951131  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951290  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:01.951497  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:01.951639  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:01.951781  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.033385  554606 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:08:02.037411  554606 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:08:02.037435  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:08:02.037506  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:08:02.037603  554606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:08:02.037613  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:08:02.037727  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:08:02.046918  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:02.069405  554606 start.go:296] duration metric: took 122.60226ms for postStartSetup
	I1008 18:08:02.069448  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.069754  554606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1008 18:08:02.069786  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.072518  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072838  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.072865  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072992  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.073180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.073331  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.073508  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	W1008 18:08:02.152610  554606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1008 18:08:02.152641  554606 fix.go:56] duration metric: took 1m31.756277865s for fixHost
	I1008 18:08:02.152667  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.155151  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155507  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.155533  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155699  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.155924  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156085  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.156317  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:08:02.156548  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:08:02.156560  554606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:08:02.258737  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410882.206938077
	
	I1008 18:08:02.258770  554606 fix.go:216] guest clock: 1728410882.206938077
	I1008 18:08:02.258778  554606 fix.go:229] Guest: 2024-10-08 18:08:02.206938077 +0000 UTC Remote: 2024-10-08 18:08:02.152649244 +0000 UTC m=+91.884799909 (delta=54.288833ms)
	I1008 18:08:02.258799  554606 fix.go:200] guest clock delta is within tolerance: 54.288833ms
	I1008 18:08:02.258806  554606 start.go:83] releasing machines lock for "ha-094095", held for 1m31.862459178s
	I1008 18:08:02.258833  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.259096  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:02.261710  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262158  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.262188  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262371  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263003  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263184  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263270  554606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:08:02.263327  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.263460  554606 ssh_runner.go:195] Run: cat /version.json
	I1008 18:08:02.263503  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.265924  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.265995  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266403  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266430  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266457  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266477  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266518  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266670  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266732  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266849  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266943  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267005  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.365833  554606 ssh_runner.go:195] Run: systemctl --version
	I1008 18:08:02.371662  554606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:08:02.527309  554606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:08:02.535812  554606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:08:02.535865  554606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:08:02.545223  554606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:08:02.545243  554606 start.go:495] detecting cgroup driver to use...
	I1008 18:08:02.545296  554606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:08:02.563394  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:08:02.576622  554606 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:08:02.576674  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:08:02.590489  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:08:02.603593  554606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:08:02.770906  554606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:08:02.915368  554606 docker.go:233] disabling docker service ...
	I1008 18:08:02.915466  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:08:02.936728  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:08:02.950842  554606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:08:03.095821  554606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:08:03.234839  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:08:03.248800  554606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:08:03.267293  554606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:08:03.267428  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.277401  554606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:08:03.277462  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.287120  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.296801  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.306442  554606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:08:03.316601  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.326858  554606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.337481  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.347229  554606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:08:03.356092  554606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:08:03.364690  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:03.501121  554606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 18:08:03.715791  554606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:08:03.715876  554606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:08:03.722347  554606 start.go:563] Will wait 60s for crictl version
	I1008 18:08:03.722394  554606 ssh_runner.go:195] Run: which crictl
	I1008 18:08:03.726190  554606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:08:03.763603  554606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:08:03.763681  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.792418  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.820998  554606 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:08:03.822155  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:03.824610  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.824970  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:03.825009  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.825195  554606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:08:03.829696  554606 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:08:03.829876  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:08:03.829939  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.872344  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.872365  554606 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:08:03.872416  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.906663  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.906695  554606 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:08:03.906708  554606 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 18:08:03.906862  554606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:08:03.907028  554606 ssh_runner.go:195] Run: crio config
	I1008 18:08:03.951823  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:08:03.951846  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:08:03.951865  554606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:08:03.951907  554606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:08:03.952075  554606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:08:03.952094  554606 kube-vip.go:115] generating kube-vip config ...
	I1008 18:08:03.952132  554606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 18:08:03.963592  554606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 18:08:03.963708  554606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1008 18:08:03.963763  554606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:08:03.973321  554606 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:08:03.973373  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 18:08:03.982394  554606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 18:08:03.998160  554606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:08:04.013870  554606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 18:08:04.029444  554606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 18:08:04.046746  554606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 18:08:04.050385  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:04.187480  554606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:08:04.202649  554606 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 18:08:04.202687  554606 certs.go:194] generating shared ca certs ...
	I1008 18:08:04.202710  554606 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.202895  554606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:08:04.202965  554606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:08:04.202980  554606 certs.go:256] generating profile certs ...
	I1008 18:08:04.203088  554606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 18:08:04.203120  554606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79
	I1008 18:08:04.203141  554606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 18:08:04.324047  554606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 ...
	I1008 18:08:04.324079  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79: {Name:mkea1c36701ecaaf5ae2823ac93dc15356845d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324274  554606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 ...
	I1008 18:08:04.324290  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79: {Name:mk673dcd3f7e7c34d453d1db5465641c8c2171a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324401  554606 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 18:08:04.324572  554606 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 18:08:04.324713  554606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 18:08:04.324729  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:08:04.324747  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:08:04.324763  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:08:04.324778  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:08:04.324790  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:08:04.324802  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:08:04.324817  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:08:04.324829  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:08:04.324876  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:08:04.324906  554606 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:08:04.324915  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:08:04.324935  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:08:04.324958  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:08:04.324978  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:08:04.325017  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:04.325042  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.325053  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.325065  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.325639  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:08:04.401305  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:08:04.478449  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:08:04.538212  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:08:04.581701  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 18:08:04.635621  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 18:08:04.690410  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:08:04.754328  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:08:04.804687  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:08:04.844003  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:08:04.866773  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:08:04.888901  554606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:08:04.904800  554606 ssh_runner.go:195] Run: openssl version
	I1008 18:08:04.910960  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:08:04.921572  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925704  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925756  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.931381  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:08:04.940576  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:08:04.951135  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955322  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955378  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.960810  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:08:04.970166  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:08:04.981978  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986369  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986454  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.991920  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:08:05.002388  554606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:08:05.006822  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:08:05.012669  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:08:05.017903  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:08:05.023233  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:08:05.028502  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:08:05.033829  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:08:05.039332  554606 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:08:05.039457  554606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:08:05.039510  554606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:08:05.075684  554606 cri.go:89] found id: "1edf8eb6e4926a4b6f2b1390395c74658d8de0dab758bd25c49ada9d10eb3c62"
	I1008 18:08:05.075705  554606 cri.go:89] found id: "b95002ecb2a0f8c5c902f6d39cb0e4879a684b4a2df25f8f0f02f90fc40edfaf"
	I1008 18:08:05.075710  554606 cri.go:89] found id: "d615cf41c0a26ef67b73c71070f51f4940d14b5b95993e26b459162737dca2c0"
	I1008 18:08:05.075715  554606 cri.go:89] found id: "1a6be7a71e09bd0d7a450960a731ccb779d3f72354128aef9d0612dd74010f3f"
	I1008 18:08:05.075719  554606 cri.go:89] found id: "7d5d2f2ee52fd7aba2e1ed86f7ad04199387d01320ea5f11b2bbd8a3f37d8e19"
	I1008 18:08:05.075724  554606 cri.go:89] found id: "079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee"
	I1008 18:08:05.075728  554606 cri.go:89] found id: "1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02"
	I1008 18:08:05.075731  554606 cri.go:89] found id: "dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3"
	I1008 18:08:05.075734  554606 cri.go:89] found id: "17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a"
	I1008 18:08:05.075744  554606 cri.go:89] found id: "347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034"
	I1008 18:08:05.075759  554606 cri.go:89] found id: "8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d"
	I1008 18:08:05.075764  554606 cri.go:89] found id: "9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7"
	I1008 18:08:05.075767  554606 cri.go:89] found id: "3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b"
	I1008 18:08:05.075774  554606 cri.go:89] found id: "0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20"
	I1008 18:08:05.075782  554606 cri.go:89] found id: "ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb"
	I1008 18:08:05.075787  554606 cri.go:89] found id: ""
	I1008 18:08:05.075841  554606 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-094095 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-094095
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (4.256707103s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-094095 node start m02 -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095 -v=7                                                          | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-094095 -v=7                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-094095 --wait=true -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:06 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:06:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:06:30.309137  554606 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:06:30.309278  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309286  554606 out.go:358] Setting ErrFile to fd 2...
	I1008 18:06:30.309292  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309514  554606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:06:30.310048  554606 out.go:352] Setting JSON to false
	I1008 18:06:30.311177  554606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6542,"bootTime":1728404248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:06:30.311239  554606 start.go:139] virtualization: kvm guest
	I1008 18:06:30.314064  554606 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:06:30.315343  554606 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:06:30.315380  554606 notify.go:220] Checking for updates...
	I1008 18:06:30.317931  554606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:06:30.319349  554606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:06:30.320487  554606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:06:30.321485  554606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:06:30.322477  554606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:06:30.323977  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:30.324106  554606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:06:30.324624  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.324671  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.339874  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1008 18:06:30.340381  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.341072  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.341127  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.341483  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.341654  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.375512  554606 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:06:30.376454  554606 start.go:297] selected driver: kvm2
	I1008 18:06:30.376466  554606 start.go:901] validating driver "kvm2" against &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.376624  554606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:06:30.376959  554606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.377044  554606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:06:30.391484  554606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:06:30.392523  554606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:06:30.392590  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:06:30.392666  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:06:30.392787  554606 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.393008  554606 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.394646  554606 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 18:06:30.395834  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:06:30.395871  554606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:06:30.395884  554606 cache.go:56] Caching tarball of preloaded images
	I1008 18:06:30.395977  554606 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:06:30.395992  554606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:06:30.396098  554606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:06:30.396294  554606 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:06:30.396336  554606 start.go:364] duration metric: took 25.244µs to acquireMachinesLock for "ha-094095"
	I1008 18:06:30.396355  554606 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:06:30.396364  554606 fix.go:54] fixHost starting: 
	I1008 18:06:30.396631  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.396667  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.410133  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I1008 18:06:30.410601  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.411054  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.411079  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.411411  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.411582  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.411739  554606 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 18:06:30.413026  554606 fix.go:112] recreateIfNeeded on ha-094095: state=Running err=<nil>
	W1008 18:06:30.413058  554606 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:06:30.414579  554606 out.go:177] * Updating the running kvm2 "ha-094095" VM ...
	I1008 18:06:30.415651  554606 machine.go:93] provisionDockerMachine start ...
	I1008 18:06:30.415671  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.415848  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.418450  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.418937  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.418961  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.419103  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.419284  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419446  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419606  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.419778  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.420056  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.420074  554606 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:06:30.527850  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.527883  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528141  554606 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 18:06:30.528169  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528335  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.530991  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531397  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.531419  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531520  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.531702  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.531851  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.532037  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.532201  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.532384  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.532397  554606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 18:06:30.657746  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.657776  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.660255  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660584  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.660613  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660854  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.661042  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661234  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661339  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.661486  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.661678  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.661694  554606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:06:30.771861  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:06:30.771897  554606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:06:30.771927  554606 buildroot.go:174] setting up certificates
	I1008 18:06:30.771935  554606 provision.go:84] configureAuth start
	I1008 18:06:30.771945  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.772190  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:06:30.774789  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775138  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.775159  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775238  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.777464  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777796  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.777820  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777940  554606 provision.go:143] copyHostCerts
	I1008 18:06:30.777975  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778033  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:06:30.778044  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778108  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:06:30.778196  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778213  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:06:30.778219  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778243  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:06:30.778299  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778314  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:06:30.778342  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778371  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:06:30.778444  554606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 18:06:30.957867  554606 provision.go:177] copyRemoteCerts
	I1008 18:06:30.957933  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:06:30.957968  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.960618  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.960989  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.961015  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.961231  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.961399  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.961567  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.961712  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:06:31.044849  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:06:31.044943  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:06:31.071107  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:06:31.071180  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 18:06:31.095762  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:06:31.095846  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:06:31.121110  554606 provision.go:87] duration metric: took 349.162437ms to configureAuth
	I1008 18:06:31.121135  554606 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:06:31.121372  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:31.121456  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:31.124338  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124715  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:31.124743  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124960  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:31.125168  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125328  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125469  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:31.125643  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:31.125857  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:31.125872  554606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:08:01.946716  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:08:01.946753  554606 machine.go:96] duration metric: took 1m31.531085514s to provisionDockerMachine
	I1008 18:08:01.946788  554606 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 18:08:01.946804  554606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:08:01.946874  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:01.947275  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:08:01.947304  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:01.950626  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951103  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:01.951131  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951290  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:01.951497  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:01.951639  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:01.951781  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.033385  554606 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:08:02.037411  554606 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:08:02.037435  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:08:02.037506  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:08:02.037603  554606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:08:02.037613  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:08:02.037727  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:08:02.046918  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:02.069405  554606 start.go:296] duration metric: took 122.60226ms for postStartSetup
	I1008 18:08:02.069448  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.069754  554606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1008 18:08:02.069786  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.072518  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072838  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.072865  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072992  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.073180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.073331  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.073508  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	W1008 18:08:02.152610  554606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1008 18:08:02.152641  554606 fix.go:56] duration metric: took 1m31.756277865s for fixHost
	I1008 18:08:02.152667  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.155151  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155507  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.155533  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155699  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.155924  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156085  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.156317  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:08:02.156548  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:08:02.156560  554606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:08:02.258737  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410882.206938077
	
	I1008 18:08:02.258770  554606 fix.go:216] guest clock: 1728410882.206938077
	I1008 18:08:02.258778  554606 fix.go:229] Guest: 2024-10-08 18:08:02.206938077 +0000 UTC Remote: 2024-10-08 18:08:02.152649244 +0000 UTC m=+91.884799909 (delta=54.288833ms)
	I1008 18:08:02.258799  554606 fix.go:200] guest clock delta is within tolerance: 54.288833ms
	I1008 18:08:02.258806  554606 start.go:83] releasing machines lock for "ha-094095", held for 1m31.862459178s
	I1008 18:08:02.258833  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.259096  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:02.261710  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262158  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.262188  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262371  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263003  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263184  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263270  554606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:08:02.263327  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.263460  554606 ssh_runner.go:195] Run: cat /version.json
	I1008 18:08:02.263503  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.265924  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.265995  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266403  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266430  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266457  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266477  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266518  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266670  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266732  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266849  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266943  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267005  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.365833  554606 ssh_runner.go:195] Run: systemctl --version
	I1008 18:08:02.371662  554606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:08:02.527309  554606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:08:02.535812  554606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:08:02.535865  554606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:08:02.545223  554606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:08:02.545243  554606 start.go:495] detecting cgroup driver to use...
	I1008 18:08:02.545296  554606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:08:02.563394  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:08:02.576622  554606 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:08:02.576674  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:08:02.590489  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:08:02.603593  554606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:08:02.770906  554606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:08:02.915368  554606 docker.go:233] disabling docker service ...
	I1008 18:08:02.915466  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:08:02.936728  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:08:02.950842  554606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:08:03.095821  554606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:08:03.234839  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:08:03.248800  554606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:08:03.267293  554606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:08:03.267428  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.277401  554606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:08:03.277462  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.287120  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.296801  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.306442  554606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:08:03.316601  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.326858  554606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.337481  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.347229  554606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:08:03.356092  554606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:08:03.364690  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:03.501121  554606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 18:08:03.715791  554606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:08:03.715876  554606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:08:03.722347  554606 start.go:563] Will wait 60s for crictl version
	I1008 18:08:03.722394  554606 ssh_runner.go:195] Run: which crictl
	I1008 18:08:03.726190  554606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:08:03.763603  554606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:08:03.763681  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.792418  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.820998  554606 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:08:03.822155  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:03.824610  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.824970  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:03.825009  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.825195  554606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:08:03.829696  554606 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:08:03.829876  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:08:03.829939  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.872344  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.872365  554606 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:08:03.872416  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.906663  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.906695  554606 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:08:03.906708  554606 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 18:08:03.906862  554606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:08:03.907028  554606 ssh_runner.go:195] Run: crio config
	I1008 18:08:03.951823  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:08:03.951846  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:08:03.951865  554606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:08:03.951907  554606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:08:03.952075  554606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:08:03.952094  554606 kube-vip.go:115] generating kube-vip config ...
	I1008 18:08:03.952132  554606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 18:08:03.963592  554606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 18:08:03.963708  554606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1008 18:08:03.963763  554606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:08:03.973321  554606 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:08:03.973373  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 18:08:03.982394  554606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 18:08:03.998160  554606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:08:04.013870  554606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 18:08:04.029444  554606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 18:08:04.046746  554606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 18:08:04.050385  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:04.187480  554606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:08:04.202649  554606 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 18:08:04.202687  554606 certs.go:194] generating shared ca certs ...
	I1008 18:08:04.202710  554606 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.202895  554606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:08:04.202965  554606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:08:04.202980  554606 certs.go:256] generating profile certs ...
	I1008 18:08:04.203088  554606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 18:08:04.203120  554606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79
	I1008 18:08:04.203141  554606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 18:08:04.324047  554606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 ...
	I1008 18:08:04.324079  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79: {Name:mkea1c36701ecaaf5ae2823ac93dc15356845d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324274  554606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 ...
	I1008 18:08:04.324290  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79: {Name:mk673dcd3f7e7c34d453d1db5465641c8c2171a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324401  554606 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 18:08:04.324572  554606 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 18:08:04.324713  554606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 18:08:04.324729  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:08:04.324747  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:08:04.324763  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:08:04.324778  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:08:04.324790  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:08:04.324802  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:08:04.324817  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:08:04.324829  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:08:04.324876  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:08:04.324906  554606 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:08:04.324915  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:08:04.324935  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:08:04.324958  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:08:04.324978  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:08:04.325017  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:04.325042  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.325053  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.325065  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.325639  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:08:04.401305  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:08:04.478449  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:08:04.538212  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:08:04.581701  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 18:08:04.635621  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 18:08:04.690410  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:08:04.754328  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:08:04.804687  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:08:04.844003  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:08:04.866773  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:08:04.888901  554606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:08:04.904800  554606 ssh_runner.go:195] Run: openssl version
	I1008 18:08:04.910960  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:08:04.921572  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925704  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925756  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.931381  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:08:04.940576  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:08:04.951135  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955322  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955378  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.960810  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:08:04.970166  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:08:04.981978  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986369  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986454  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.991920  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:08:05.002388  554606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:08:05.006822  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:08:05.012669  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:08:05.017903  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:08:05.023233  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:08:05.028502  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:08:05.033829  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:08:05.039332  554606 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:08:05.039457  554606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:08:05.039510  554606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:08:05.075684  554606 cri.go:89] found id: "1edf8eb6e4926a4b6f2b1390395c74658d8de0dab758bd25c49ada9d10eb3c62"
	I1008 18:08:05.075705  554606 cri.go:89] found id: "b95002ecb2a0f8c5c902f6d39cb0e4879a684b4a2df25f8f0f02f90fc40edfaf"
	I1008 18:08:05.075710  554606 cri.go:89] found id: "d615cf41c0a26ef67b73c71070f51f4940d14b5b95993e26b459162737dca2c0"
	I1008 18:08:05.075715  554606 cri.go:89] found id: "1a6be7a71e09bd0d7a450960a731ccb779d3f72354128aef9d0612dd74010f3f"
	I1008 18:08:05.075719  554606 cri.go:89] found id: "7d5d2f2ee52fd7aba2e1ed86f7ad04199387d01320ea5f11b2bbd8a3f37d8e19"
	I1008 18:08:05.075724  554606 cri.go:89] found id: "079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee"
	I1008 18:08:05.075728  554606 cri.go:89] found id: "1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02"
	I1008 18:08:05.075731  554606 cri.go:89] found id: "dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3"
	I1008 18:08:05.075734  554606 cri.go:89] found id: "17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a"
	I1008 18:08:05.075744  554606 cri.go:89] found id: "347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034"
	I1008 18:08:05.075759  554606 cri.go:89] found id: "8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d"
	I1008 18:08:05.075764  554606 cri.go:89] found id: "9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7"
	I1008 18:08:05.075767  554606 cri.go:89] found id: "3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b"
	I1008 18:08:05.075774  554606 cri.go:89] found id: "0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20"
	I1008 18:08:05.075782  554606 cri.go:89] found id: "ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb"
	I1008 18:08:05.075787  554606 cri.go:89] found id: ""
	I1008 18:08:05.075841  554606 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-094095 describe pod etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-094095 describe pod etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03: exit status 1 (62.878214ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-apiserver-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-094095-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-094095 describe pod etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (797.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 node delete m03 -v=7 --alsologtostderr: (5.469628732s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr: exit status 7 (486.561605ms)

                                                
                                                
-- stdout --
	ha-094095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-094095-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-094095-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:17:50.877559  557922 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:17:50.877828  557922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:17:50.877838  557922 out.go:358] Setting ErrFile to fd 2...
	I1008 18:17:50.877844  557922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:17:50.878017  557922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:17:50.878202  557922 out.go:352] Setting JSON to false
	I1008 18:17:50.878235  557922 mustload.go:65] Loading cluster: ha-094095
	I1008 18:17:50.878370  557922 notify.go:220] Checking for updates...
	I1008 18:17:50.878972  557922 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:17:50.879002  557922 status.go:174] checking status of ha-094095 ...
	I1008 18:17:50.879469  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:50.879540  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:50.901501  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I1008 18:17:50.902016  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:50.902707  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:50.902738  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:50.903086  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:50.903264  557922 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 18:17:50.904979  557922 status.go:371] ha-094095 host status = "Running" (err=<nil>)
	I1008 18:17:50.904998  557922 host.go:66] Checking if "ha-094095" exists ...
	I1008 18:17:50.905394  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:50.905460  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:50.920420  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45561
	I1008 18:17:50.920795  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:50.921222  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:50.921241  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:50.921538  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:50.921735  557922 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:17:50.924131  557922 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:17:50.924544  557922 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:17:50.924565  557922 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:17:50.924700  557922 host.go:66] Checking if "ha-094095" exists ...
	I1008 18:17:50.925005  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:50.925049  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:50.940812  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I1008 18:17:50.941367  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:50.941952  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:50.941974  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:50.942356  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:50.942545  557922 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:17:50.942748  557922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:17:50.942773  557922 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:17:50.945617  557922 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:17:50.946049  557922 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:17:50.946070  557922 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:17:50.946230  557922 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:17:50.946399  557922 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:17:50.946524  557922 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:17:50.946622  557922 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:17:51.025980  557922 ssh_runner.go:195] Run: systemctl --version
	I1008 18:17:51.032131  557922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:17:51.046005  557922 kubeconfig.go:125] found "ha-094095" server: "https://192.168.39.254:8443"
	I1008 18:17:51.046041  557922 api_server.go:166] Checking apiserver status ...
	I1008 18:17:51.046071  557922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:17:51.060546  557922 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5000/cgroup
	W1008 18:17:51.070782  557922 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5000/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 18:17:51.070834  557922 ssh_runner.go:195] Run: ls
	I1008 18:17:51.075253  557922 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1008 18:17:51.079647  557922 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1008 18:17:51.079674  557922 status.go:463] ha-094095 apiserver status = Running (err=<nil>)
	I1008 18:17:51.079686  557922 status.go:176] ha-094095 status: &{Name:ha-094095 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:17:51.079711  557922 status.go:174] checking status of ha-094095-m02 ...
	I1008 18:17:51.080160  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:51.080240  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:51.096455  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I1008 18:17:51.096863  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:51.097320  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:51.097344  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:51.097759  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:51.097970  557922 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 18:17:51.099671  557922 status.go:371] ha-094095-m02 host status = "Running" (err=<nil>)
	I1008 18:17:51.099689  557922 host.go:66] Checking if "ha-094095-m02" exists ...
	I1008 18:17:51.099973  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:51.100007  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:51.115569  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I1008 18:17:51.115976  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:51.116472  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:51.116498  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:51.116799  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:51.116980  557922 main.go:141] libmachine: (ha-094095-m02) Calling .GetIP
	I1008 18:17:51.120000  557922 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:17:51.120381  557922 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 19:08:16 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 18:17:51.120402  557922 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:17:51.120561  557922 host.go:66] Checking if "ha-094095-m02" exists ...
	I1008 18:17:51.120926  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:51.120998  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:51.136862  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I1008 18:17:51.137246  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:51.137839  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:51.137879  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:51.138225  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:51.138444  557922 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 18:17:51.138665  557922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:17:51.138688  557922 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 18:17:51.141084  557922 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:17:51.141513  557922 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 19:08:16 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 18:17:51.141531  557922 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:17:51.141729  557922 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 18:17:51.141893  557922 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 18:17:51.142044  557922 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 18:17:51.142212  557922 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 18:17:51.236127  557922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:17:51.256287  557922 kubeconfig.go:125] found "ha-094095" server: "https://192.168.39.254:8443"
	I1008 18:17:51.256321  557922 api_server.go:166] Checking apiserver status ...
	I1008 18:17:51.256364  557922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:17:51.273888  557922 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	W1008 18:17:51.285088  557922 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 18:17:51.285145  557922 ssh_runner.go:195] Run: ls
	I1008 18:17:51.290340  557922 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1008 18:17:51.294749  557922 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1008 18:17:51.294775  557922 status.go:463] ha-094095-m02 apiserver status = Running (err=<nil>)
	I1008 18:17:51.294786  557922 status.go:176] ha-094095-m02 status: &{Name:ha-094095-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:17:51.294807  557922 status.go:174] checking status of ha-094095-m04 ...
	I1008 18:17:51.295134  557922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:17:51.295179  557922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:17:51.310980  557922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45949
	I1008 18:17:51.311381  557922 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:17:51.311876  557922 main.go:141] libmachine: Using API Version  1
	I1008 18:17:51.311898  557922 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:17:51.312209  557922 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:17:51.312384  557922 main.go:141] libmachine: (ha-094095-m04) Calling .GetState
	I1008 18:17:51.314136  557922 status.go:371] ha-094095-m04 host status = "Stopped" (err=<nil>)
	I1008 18:17:51.314152  557922 status.go:384] host is not running, skipping remaining checks
	I1008 18:17:51.314158  557922 status.go:176] ha-094095-m04 status: &{Name:ha-094095-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (4.297875288s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-094095 node start m02 -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095 -v=7                                                          | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-094095 -v=7                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-094095 --wait=true -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:06 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC |                     |
	| node    | ha-094095 node delete m03 -v=7                                                  | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC | 08 Oct 24 18:17 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:06:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:06:30.309137  554606 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:06:30.309278  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309286  554606 out.go:358] Setting ErrFile to fd 2...
	I1008 18:06:30.309292  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309514  554606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:06:30.310048  554606 out.go:352] Setting JSON to false
	I1008 18:06:30.311177  554606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6542,"bootTime":1728404248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:06:30.311239  554606 start.go:139] virtualization: kvm guest
	I1008 18:06:30.314064  554606 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:06:30.315343  554606 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:06:30.315380  554606 notify.go:220] Checking for updates...
	I1008 18:06:30.317931  554606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:06:30.319349  554606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:06:30.320487  554606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:06:30.321485  554606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:06:30.322477  554606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:06:30.323977  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:30.324106  554606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:06:30.324624  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.324671  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.339874  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1008 18:06:30.340381  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.341072  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.341127  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.341483  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.341654  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.375512  554606 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:06:30.376454  554606 start.go:297] selected driver: kvm2
	I1008 18:06:30.376466  554606 start.go:901] validating driver "kvm2" against &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.376624  554606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:06:30.376959  554606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.377044  554606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:06:30.391484  554606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:06:30.392523  554606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:06:30.392590  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:06:30.392666  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:06:30.392787  554606 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:fa
lse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.393008  554606 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.394646  554606 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 18:06:30.395834  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:06:30.395871  554606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:06:30.395884  554606 cache.go:56] Caching tarball of preloaded images
	I1008 18:06:30.395977  554606 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:06:30.395992  554606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:06:30.396098  554606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:06:30.396294  554606 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:06:30.396336  554606 start.go:364] duration metric: took 25.244µs to acquireMachinesLock for "ha-094095"
	I1008 18:06:30.396355  554606 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:06:30.396364  554606 fix.go:54] fixHost starting: 
	I1008 18:06:30.396631  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.396667  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.410133  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I1008 18:06:30.410601  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.411054  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.411079  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.411411  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.411582  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.411739  554606 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 18:06:30.413026  554606 fix.go:112] recreateIfNeeded on ha-094095: state=Running err=<nil>
	W1008 18:06:30.413058  554606 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:06:30.414579  554606 out.go:177] * Updating the running kvm2 "ha-094095" VM ...
	I1008 18:06:30.415651  554606 machine.go:93] provisionDockerMachine start ...
	I1008 18:06:30.415671  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.415848  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.418450  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.418937  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.418961  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.419103  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.419284  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419446  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419606  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.419778  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.420056  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.420074  554606 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:06:30.527850  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.527883  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528141  554606 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 18:06:30.528169  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528335  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.530991  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531397  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.531419  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531520  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.531702  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.531851  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.532037  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.532201  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.532384  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.532397  554606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 18:06:30.657746  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.657776  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.660255  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660584  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.660613  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660854  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.661042  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661234  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661339  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.661486  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.661678  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.661694  554606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:06:30.771861  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:06:30.771897  554606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:06:30.771927  554606 buildroot.go:174] setting up certificates
	I1008 18:06:30.771935  554606 provision.go:84] configureAuth start
	I1008 18:06:30.771945  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.772190  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:06:30.774789  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775138  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.775159  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775238  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.777464  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777796  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.777820  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777940  554606 provision.go:143] copyHostCerts
	I1008 18:06:30.777975  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778033  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:06:30.778044  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778108  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:06:30.778196  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778213  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:06:30.778219  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778243  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:06:30.778299  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778314  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:06:30.778342  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778371  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:06:30.778444  554606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 18:06:30.957867  554606 provision.go:177] copyRemoteCerts
	I1008 18:06:30.957933  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:06:30.957968  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.960618  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.960989  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.961015  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.961231  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.961399  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.961567  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.961712  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:06:31.044849  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:06:31.044943  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:06:31.071107  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:06:31.071180  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 18:06:31.095762  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:06:31.095846  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:06:31.121110  554606 provision.go:87] duration metric: took 349.162437ms to configureAuth
	I1008 18:06:31.121135  554606 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:06:31.121372  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:31.121456  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:31.124338  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124715  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:31.124743  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124960  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:31.125168  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125328  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125469  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:31.125643  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:31.125857  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:31.125872  554606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:08:01.946716  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:08:01.946753  554606 machine.go:96] duration metric: took 1m31.531085514s to provisionDockerMachine
	I1008 18:08:01.946788  554606 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 18:08:01.946804  554606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:08:01.946874  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:01.947275  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:08:01.947304  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:01.950626  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951103  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:01.951131  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951290  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:01.951497  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:01.951639  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:01.951781  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.033385  554606 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:08:02.037411  554606 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:08:02.037435  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:08:02.037506  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:08:02.037603  554606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:08:02.037613  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:08:02.037727  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:08:02.046918  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:02.069405  554606 start.go:296] duration metric: took 122.60226ms for postStartSetup
	I1008 18:08:02.069448  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.069754  554606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1008 18:08:02.069786  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.072518  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072838  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.072865  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072992  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.073180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.073331  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.073508  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	W1008 18:08:02.152610  554606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1008 18:08:02.152641  554606 fix.go:56] duration metric: took 1m31.756277865s for fixHost
	I1008 18:08:02.152667  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.155151  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155507  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.155533  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155699  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.155924  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156085  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.156317  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:08:02.156548  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:08:02.156560  554606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:08:02.258737  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410882.206938077
	
	I1008 18:08:02.258770  554606 fix.go:216] guest clock: 1728410882.206938077
	I1008 18:08:02.258778  554606 fix.go:229] Guest: 2024-10-08 18:08:02.206938077 +0000 UTC Remote: 2024-10-08 18:08:02.152649244 +0000 UTC m=+91.884799909 (delta=54.288833ms)
	I1008 18:08:02.258799  554606 fix.go:200] guest clock delta is within tolerance: 54.288833ms
	I1008 18:08:02.258806  554606 start.go:83] releasing machines lock for "ha-094095", held for 1m31.862459178s
	I1008 18:08:02.258833  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.259096  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:02.261710  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262158  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.262188  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262371  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263003  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263184  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263270  554606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:08:02.263327  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.263460  554606 ssh_runner.go:195] Run: cat /version.json
	I1008 18:08:02.263503  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.265924  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.265995  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266403  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266430  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266457  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266477  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266518  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266670  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266732  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266849  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266943  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267005  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.365833  554606 ssh_runner.go:195] Run: systemctl --version
	I1008 18:08:02.371662  554606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:08:02.527309  554606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:08:02.535812  554606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:08:02.535865  554606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:08:02.545223  554606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:08:02.545243  554606 start.go:495] detecting cgroup driver to use...
	I1008 18:08:02.545296  554606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:08:02.563394  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:08:02.576622  554606 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:08:02.576674  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:08:02.590489  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:08:02.603593  554606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:08:02.770906  554606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:08:02.915368  554606 docker.go:233] disabling docker service ...
	I1008 18:08:02.915466  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:08:02.936728  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:08:02.950842  554606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:08:03.095821  554606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:08:03.234839  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:08:03.248800  554606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:08:03.267293  554606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:08:03.267428  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.277401  554606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:08:03.277462  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.287120  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.296801  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.306442  554606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:08:03.316601  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.326858  554606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.337481  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.347229  554606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:08:03.356092  554606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:08:03.364690  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:03.501121  554606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 18:08:03.715791  554606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:08:03.715876  554606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:08:03.722347  554606 start.go:563] Will wait 60s for crictl version
	I1008 18:08:03.722394  554606 ssh_runner.go:195] Run: which crictl
	I1008 18:08:03.726190  554606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:08:03.763603  554606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:08:03.763681  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.792418  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.820998  554606 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:08:03.822155  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:03.824610  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.824970  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:03.825009  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.825195  554606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:08:03.829696  554606 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:08:03.829876  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:08:03.829939  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.872344  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.872365  554606 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:08:03.872416  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.906663  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.906695  554606 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:08:03.906708  554606 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 18:08:03.906862  554606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:08:03.907028  554606 ssh_runner.go:195] Run: crio config
	I1008 18:08:03.951823  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:08:03.951846  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:08:03.951865  554606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:08:03.951907  554606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:08:03.952075  554606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:08:03.952094  554606 kube-vip.go:115] generating kube-vip config ...
	I1008 18:08:03.952132  554606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 18:08:03.963592  554606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 18:08:03.963708  554606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1008 18:08:03.963763  554606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:08:03.973321  554606 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:08:03.973373  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 18:08:03.982394  554606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 18:08:03.998160  554606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:08:04.013870  554606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 18:08:04.029444  554606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 18:08:04.046746  554606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 18:08:04.050385  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:04.187480  554606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:08:04.202649  554606 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 18:08:04.202687  554606 certs.go:194] generating shared ca certs ...
	I1008 18:08:04.202710  554606 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.202895  554606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:08:04.202965  554606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:08:04.202980  554606 certs.go:256] generating profile certs ...
	I1008 18:08:04.203088  554606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 18:08:04.203120  554606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79
	I1008 18:08:04.203141  554606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 18:08:04.324047  554606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 ...
	I1008 18:08:04.324079  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79: {Name:mkea1c36701ecaaf5ae2823ac93dc15356845d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324274  554606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 ...
	I1008 18:08:04.324290  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79: {Name:mk673dcd3f7e7c34d453d1db5465641c8c2171a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324401  554606 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 18:08:04.324572  554606 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 18:08:04.324713  554606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 18:08:04.324729  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:08:04.324747  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:08:04.324763  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:08:04.324778  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:08:04.324790  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:08:04.324802  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:08:04.324817  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:08:04.324829  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:08:04.324876  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:08:04.324906  554606 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:08:04.324915  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:08:04.324935  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:08:04.324958  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:08:04.324978  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:08:04.325017  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:04.325042  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.325053  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.325065  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.325639  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:08:04.401305  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:08:04.478449  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:08:04.538212  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:08:04.581701  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 18:08:04.635621  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 18:08:04.690410  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:08:04.754328  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:08:04.804687  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:08:04.844003  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:08:04.866773  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:08:04.888901  554606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:08:04.904800  554606 ssh_runner.go:195] Run: openssl version
	I1008 18:08:04.910960  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:08:04.921572  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925704  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925756  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.931381  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:08:04.940576  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:08:04.951135  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955322  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955378  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.960810  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:08:04.970166  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:08:04.981978  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986369  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986454  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.991920  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:08:05.002388  554606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:08:05.006822  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:08:05.012669  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:08:05.017903  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:08:05.023233  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:08:05.028502  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:08:05.033829  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:08:05.039332  554606 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:08:05.039457  554606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:08:05.039510  554606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:08:05.075684  554606 cri.go:89] found id: "1edf8eb6e4926a4b6f2b1390395c74658d8de0dab758bd25c49ada9d10eb3c62"
	I1008 18:08:05.075705  554606 cri.go:89] found id: "b95002ecb2a0f8c5c902f6d39cb0e4879a684b4a2df25f8f0f02f90fc40edfaf"
	I1008 18:08:05.075710  554606 cri.go:89] found id: "d615cf41c0a26ef67b73c71070f51f4940d14b5b95993e26b459162737dca2c0"
	I1008 18:08:05.075715  554606 cri.go:89] found id: "1a6be7a71e09bd0d7a450960a731ccb779d3f72354128aef9d0612dd74010f3f"
	I1008 18:08:05.075719  554606 cri.go:89] found id: "7d5d2f2ee52fd7aba2e1ed86f7ad04199387d01320ea5f11b2bbd8a3f37d8e19"
	I1008 18:08:05.075724  554606 cri.go:89] found id: "079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee"
	I1008 18:08:05.075728  554606 cri.go:89] found id: "1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02"
	I1008 18:08:05.075731  554606 cri.go:89] found id: "dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3"
	I1008 18:08:05.075734  554606 cri.go:89] found id: "17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a"
	I1008 18:08:05.075744  554606 cri.go:89] found id: "347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034"
	I1008 18:08:05.075759  554606 cri.go:89] found id: "8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d"
	I1008 18:08:05.075764  554606 cri.go:89] found id: "9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7"
	I1008 18:08:05.075767  554606 cri.go:89] found id: "3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b"
	I1008 18:08:05.075774  554606 cri.go:89] found id: "0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20"
	I1008 18:08:05.075782  554606 cri.go:89] found id: "ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb"
	I1008 18:08:05.075787  554606 cri.go:89] found id: ""
	I1008 18:08:05.075841  554606 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
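
The log above ends with minikube probing each remaining control-plane certificate via "openssl x509 -noout -checkend 86400" before reusing the existing machine. As an illustration only, not minikube's own code, here is a minimal Go sketch that runs the same probe over the certificate paths reported in the log; the file list and the 24-hour window come from the log lines above, while the program structure and naming are ours:

// certcheck.go - minimal sketch mirroring the "openssl x509 -noout -checkend 86400"
// probes seen in the log above (hypothetical helper, not part of minikube).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Certificate paths as reported under /var/lib/minikube/certs in the log.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// -checkend 86400 exits non-zero if the certificate expires within 24 hours.
		cmd := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: expires within 24h or could not be read (%v)\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}
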
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03: exit status 1 (92.992251ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-k9b9n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7lpbc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7lpbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s (x2 over 10s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-apiserver-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-094095-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (10.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (5.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-094095" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-094095\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-094095\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-094095\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.99\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.65\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.33\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
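
The assertion at ha_test.go:415 reads the "Status" field of the profile entry in the JSON printed by "out/minikube-linux-amd64 profile list --output json" (expected "Degraded", got "Starting"). As a hedged illustration rather than the test's actual code, a minimal Go sketch that decodes only the fields this assertion looks at, assuming the JSON shape shown above (a top-level "valid" array whose entries carry "Name" and "Status"):

// profilestatus.go - minimal sketch (not minikube's or the test's code): extract the
// per-profile Status that ha_test.go:415 compares against "Degraded".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields of `minikube profile list --output json`
// that are needed here; the real output carries many more keys (see the log above).
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// After deleting a secondary control-plane node the test expects "Degraded";
		// the run above still reported "Starting".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}

Run against the same binary, this prints one "name: status" line per valid profile, which is the value the test compares against "Degraded".
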
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (4.126946314s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-094095 node start m02 -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095 -v=7                                                          | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-094095 -v=7                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-094095 --wait=true -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:06 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC |                     |
	| node    | ha-094095 node delete m03 -v=7                                                  | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC | 08 Oct 24 18:17 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:06:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:06:30.309137  554606 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:06:30.309278  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309286  554606 out.go:358] Setting ErrFile to fd 2...
	I1008 18:06:30.309292  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309514  554606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:06:30.310048  554606 out.go:352] Setting JSON to false
	I1008 18:06:30.311177  554606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6542,"bootTime":1728404248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:06:30.311239  554606 start.go:139] virtualization: kvm guest
	I1008 18:06:30.314064  554606 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:06:30.315343  554606 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:06:30.315380  554606 notify.go:220] Checking for updates...
	I1008 18:06:30.317931  554606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:06:30.319349  554606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:06:30.320487  554606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:06:30.321485  554606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:06:30.322477  554606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:06:30.323977  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:30.324106  554606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:06:30.324624  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.324671  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.339874  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1008 18:06:30.340381  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.341072  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.341127  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.341483  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.341654  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.375512  554606 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:06:30.376454  554606 start.go:297] selected driver: kvm2
	I1008 18:06:30.376466  554606 start.go:901] validating driver "kvm2" against &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.376624  554606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:06:30.376959  554606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.377044  554606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:06:30.391484  554606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:06:30.392523  554606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:06:30.392590  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:06:30.392666  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:06:30.392787  554606 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.393008  554606 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.394646  554606 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 18:06:30.395834  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:06:30.395871  554606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:06:30.395884  554606 cache.go:56] Caching tarball of preloaded images
	I1008 18:06:30.395977  554606 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:06:30.395992  554606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:06:30.396098  554606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:06:30.396294  554606 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:06:30.396336  554606 start.go:364] duration metric: took 25.244µs to acquireMachinesLock for "ha-094095"
	I1008 18:06:30.396355  554606 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:06:30.396364  554606 fix.go:54] fixHost starting: 
	I1008 18:06:30.396631  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.396667  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.410133  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I1008 18:06:30.410601  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.411054  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.411079  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.411411  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.411582  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.411739  554606 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 18:06:30.413026  554606 fix.go:112] recreateIfNeeded on ha-094095: state=Running err=<nil>
	W1008 18:06:30.413058  554606 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:06:30.414579  554606 out.go:177] * Updating the running kvm2 "ha-094095" VM ...
	I1008 18:06:30.415651  554606 machine.go:93] provisionDockerMachine start ...
	I1008 18:06:30.415671  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.415848  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.418450  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.418937  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.418961  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.419103  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.419284  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419446  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419606  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.419778  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.420056  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.420074  554606 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:06:30.527850  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.527883  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528141  554606 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 18:06:30.528169  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528335  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.530991  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531397  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.531419  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531520  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.531702  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.531851  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.532037  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.532201  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.532384  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.532397  554606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 18:06:30.657746  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.657776  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.660255  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660584  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.660613  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660854  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.661042  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661234  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661339  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.661486  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.661678  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.661694  554606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:06:30.771861  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:06:30.771897  554606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:06:30.771927  554606 buildroot.go:174] setting up certificates
	I1008 18:06:30.771935  554606 provision.go:84] configureAuth start
	I1008 18:06:30.771945  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.772190  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:06:30.774789  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775138  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.775159  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775238  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.777464  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777796  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.777820  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777940  554606 provision.go:143] copyHostCerts
	I1008 18:06:30.777975  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778033  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:06:30.778044  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778108  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:06:30.778196  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778213  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:06:30.778219  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778243  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:06:30.778299  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778314  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:06:30.778342  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778371  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:06:30.778444  554606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 18:06:30.957867  554606 provision.go:177] copyRemoteCerts
	I1008 18:06:30.957933  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:06:30.957968  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.960618  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.960989  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.961015  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.961231  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.961399  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.961567  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.961712  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:06:31.044849  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:06:31.044943  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:06:31.071107  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:06:31.071180  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 18:06:31.095762  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:06:31.095846  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:06:31.121110  554606 provision.go:87] duration metric: took 349.162437ms to configureAuth
	I1008 18:06:31.121135  554606 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:06:31.121372  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:31.121456  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:31.124338  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124715  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:31.124743  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124960  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:31.125168  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125328  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125469  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:31.125643  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:31.125857  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:31.125872  554606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:08:01.946716  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:08:01.946753  554606 machine.go:96] duration metric: took 1m31.531085514s to provisionDockerMachine
	I1008 18:08:01.946788  554606 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 18:08:01.946804  554606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:08:01.946874  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:01.947275  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:08:01.947304  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:01.950626  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951103  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:01.951131  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951290  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:01.951497  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:01.951639  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:01.951781  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.033385  554606 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:08:02.037411  554606 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:08:02.037435  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:08:02.037506  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:08:02.037603  554606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:08:02.037613  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:08:02.037727  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:08:02.046918  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:02.069405  554606 start.go:296] duration metric: took 122.60226ms for postStartSetup
	I1008 18:08:02.069448  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.069754  554606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1008 18:08:02.069786  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.072518  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072838  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.072865  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072992  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.073180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.073331  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.073508  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	W1008 18:08:02.152610  554606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1008 18:08:02.152641  554606 fix.go:56] duration metric: took 1m31.756277865s for fixHost
	I1008 18:08:02.152667  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.155151  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155507  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.155533  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155699  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.155924  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156085  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.156317  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:08:02.156548  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:08:02.156560  554606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:08:02.258737  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410882.206938077
	
	I1008 18:08:02.258770  554606 fix.go:216] guest clock: 1728410882.206938077
	I1008 18:08:02.258778  554606 fix.go:229] Guest: 2024-10-08 18:08:02.206938077 +0000 UTC Remote: 2024-10-08 18:08:02.152649244 +0000 UTC m=+91.884799909 (delta=54.288833ms)
	I1008 18:08:02.258799  554606 fix.go:200] guest clock delta is within tolerance: 54.288833ms
	I1008 18:08:02.258806  554606 start.go:83] releasing machines lock for "ha-094095", held for 1m31.862459178s
	I1008 18:08:02.258833  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.259096  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:02.261710  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262158  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.262188  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262371  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263003  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263184  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263270  554606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:08:02.263327  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.263460  554606 ssh_runner.go:195] Run: cat /version.json
	I1008 18:08:02.263503  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.265924  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.265995  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266403  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266430  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266457  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266477  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266518  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266670  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266732  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266849  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266943  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267005  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.365833  554606 ssh_runner.go:195] Run: systemctl --version
	I1008 18:08:02.371662  554606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:08:02.527309  554606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:08:02.535812  554606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:08:02.535865  554606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:08:02.545223  554606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:08:02.545243  554606 start.go:495] detecting cgroup driver to use...
	I1008 18:08:02.545296  554606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:08:02.563394  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:08:02.576622  554606 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:08:02.576674  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:08:02.590489  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:08:02.603593  554606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:08:02.770906  554606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:08:02.915368  554606 docker.go:233] disabling docker service ...
	I1008 18:08:02.915466  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:08:02.936728  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:08:02.950842  554606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:08:03.095821  554606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:08:03.234839  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:08:03.248800  554606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:08:03.267293  554606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:08:03.267428  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.277401  554606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:08:03.277462  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.287120  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.296801  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.306442  554606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:08:03.316601  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.326858  554606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.337481  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.347229  554606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:08:03.356092  554606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:08:03.364690  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:03.501121  554606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 18:08:03.715791  554606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:08:03.715876  554606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:08:03.722347  554606 start.go:563] Will wait 60s for crictl version
	I1008 18:08:03.722394  554606 ssh_runner.go:195] Run: which crictl
	I1008 18:08:03.726190  554606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:08:03.763603  554606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:08:03.763681  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.792418  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.820998  554606 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:08:03.822155  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:03.824610  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.824970  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:03.825009  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.825195  554606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:08:03.829696  554606 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:08:03.829876  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:08:03.829939  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.872344  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.872365  554606 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:08:03.872416  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.906663  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.906695  554606 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:08:03.906708  554606 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 18:08:03.906862  554606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:08:03.907028  554606 ssh_runner.go:195] Run: crio config
	I1008 18:08:03.951823  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:08:03.951846  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:08:03.951865  554606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:08:03.951907  554606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:08:03.952075  554606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:08:03.952094  554606 kube-vip.go:115] generating kube-vip config ...
	I1008 18:08:03.952132  554606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 18:08:03.963592  554606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 18:08:03.963708  554606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1008 18:08:03.963763  554606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:08:03.973321  554606 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:08:03.973373  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 18:08:03.982394  554606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 18:08:03.998160  554606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:08:04.013870  554606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 18:08:04.029444  554606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 18:08:04.046746  554606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 18:08:04.050385  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:04.187480  554606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:08:04.202649  554606 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 18:08:04.202687  554606 certs.go:194] generating shared ca certs ...
	I1008 18:08:04.202710  554606 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.202895  554606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:08:04.202965  554606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:08:04.202980  554606 certs.go:256] generating profile certs ...
	I1008 18:08:04.203088  554606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 18:08:04.203120  554606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79
	I1008 18:08:04.203141  554606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 18:08:04.324047  554606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 ...
	I1008 18:08:04.324079  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79: {Name:mkea1c36701ecaaf5ae2823ac93dc15356845d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324274  554606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 ...
	I1008 18:08:04.324290  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79: {Name:mk673dcd3f7e7c34d453d1db5465641c8c2171a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324401  554606 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 18:08:04.324572  554606 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 18:08:04.324713  554606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 18:08:04.324729  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:08:04.324747  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:08:04.324763  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:08:04.324778  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:08:04.324790  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:08:04.324802  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:08:04.324817  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:08:04.324829  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:08:04.324876  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:08:04.324906  554606 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:08:04.324915  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:08:04.324935  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:08:04.324958  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:08:04.324978  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:08:04.325017  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:04.325042  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.325053  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.325065  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.325639  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:08:04.401305  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:08:04.478449  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:08:04.538212  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:08:04.581701  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 18:08:04.635621  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 18:08:04.690410  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:08:04.754328  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:08:04.804687  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:08:04.844003  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:08:04.866773  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:08:04.888901  554606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:08:04.904800  554606 ssh_runner.go:195] Run: openssl version
	I1008 18:08:04.910960  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:08:04.921572  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925704  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925756  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.931381  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:08:04.940576  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:08:04.951135  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955322  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955378  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.960810  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:08:04.970166  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:08:04.981978  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986369  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986454  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.991920  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:08:05.002388  554606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:08:05.006822  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:08:05.012669  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:08:05.017903  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:08:05.023233  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:08:05.028502  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:08:05.033829  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:08:05.039332  554606 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:08:05.039457  554606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:08:05.039510  554606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:08:05.075684  554606 cri.go:89] found id: "1edf8eb6e4926a4b6f2b1390395c74658d8de0dab758bd25c49ada9d10eb3c62"
	I1008 18:08:05.075705  554606 cri.go:89] found id: "b95002ecb2a0f8c5c902f6d39cb0e4879a684b4a2df25f8f0f02f90fc40edfaf"
	I1008 18:08:05.075710  554606 cri.go:89] found id: "d615cf41c0a26ef67b73c71070f51f4940d14b5b95993e26b459162737dca2c0"
	I1008 18:08:05.075715  554606 cri.go:89] found id: "1a6be7a71e09bd0d7a450960a731ccb779d3f72354128aef9d0612dd74010f3f"
	I1008 18:08:05.075719  554606 cri.go:89] found id: "7d5d2f2ee52fd7aba2e1ed86f7ad04199387d01320ea5f11b2bbd8a3f37d8e19"
	I1008 18:08:05.075724  554606 cri.go:89] found id: "079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee"
	I1008 18:08:05.075728  554606 cri.go:89] found id: "1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02"
	I1008 18:08:05.075731  554606 cri.go:89] found id: "dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3"
	I1008 18:08:05.075734  554606 cri.go:89] found id: "17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a"
	I1008 18:08:05.075744  554606 cri.go:89] found id: "347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034"
	I1008 18:08:05.075759  554606 cri.go:89] found id: "8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d"
	I1008 18:08:05.075764  554606 cri.go:89] found id: "9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7"
	I1008 18:08:05.075767  554606 cri.go:89] found id: "3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b"
	I1008 18:08:05.075774  554606 cri.go:89] found id: "0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20"
	I1008 18:08:05.075782  554606 cri.go:89] found id: "ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb"
	I1008 18:08:05.075787  554606 cri.go:89] found id: ""
	I1008 18:08:05.075841  554606 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:261: (dbg) Run:  kubectl --context ha-094095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03: exit status 1 (75.632846ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-k9b9n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7lpbc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7lpbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  15s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s (x2 over 15s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s (x2 over 16s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-apiserver-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-094095-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-094095-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n etcd-ha-094095-m03 kube-apiserver-ha-094095-m03 kube-controller-manager-ha-094095-m03 kube-vip-ha-094095-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (5.29s)
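The FailedScheduling events above show why the replacement busybox pod never lands: one node carries the node.kubernetes.io/unreachable taint, one node is marked unschedulable, and the two remaining nodes already violate the pod's anti-affinity rule. A minimal way to re-check that picture by hand against the same profile (illustrative shell commands, assuming the ha-094095 kubeconfig context is still present; these are not part of the recorded test run):

	kubectl --context ha-094095 get nodes -o wide
	kubectl --context ha-094095 describe nodes | grep -E 'Name:|Taints:|Unschedulable:'
	kubectl --context ha-094095 describe pod busybox-7dff88458-k9b9n | sed -n '/Events:/,$p'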

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (176.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 stop -v=7 --alsologtostderr
E1008 18:18:01.958610  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-094095 stop -v=7 --alsologtostderr: exit status 82 (2m1.765881953s)

                                                
                                                
-- stdout --
	* Stopping node "ha-094095-m04"  ...
	* Stopping node "ha-094095-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:18:01.627128  558281 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:18:01.627259  558281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:18:01.627272  558281 out.go:358] Setting ErrFile to fd 2...
	I1008 18:18:01.627278  558281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:18:01.627475  558281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:18:01.627703  558281 out.go:352] Setting JSON to false
	I1008 18:18:01.627776  558281 mustload.go:65] Loading cluster: ha-094095
	I1008 18:18:01.628136  558281 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:18:01.628214  558281 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:18:01.628387  558281 mustload.go:65] Loading cluster: ha-094095
	I1008 18:18:01.628526  558281 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:18:01.628569  558281 stop.go:39] StopHost: ha-094095-m04
	I1008 18:18:01.628923  558281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:18:01.628976  558281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:18:01.644387  558281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I1008 18:18:01.644971  558281 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:18:01.645564  558281 main.go:141] libmachine: Using API Version  1
	I1008 18:18:01.645596  558281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:18:01.645941  558281 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:18:01.649407  558281 out.go:177] * Stopping node "ha-094095-m04"  ...
	I1008 18:18:01.651024  558281 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 18:18:01.651057  558281 main.go:141] libmachine: (ha-094095-m04) Calling .DriverName
	I1008 18:18:01.651336  558281 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 18:18:01.651376  558281 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHHostname
	I1008 18:18:01.652992  558281 retry.go:31] will retry after 284.53882ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1008 18:18:01.938539  558281 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHHostname
	I1008 18:18:01.940223  558281 retry.go:31] will retry after 405.182293ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1008 18:18:02.345608  558281 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHHostname
	I1008 18:18:02.347169  558281 retry.go:31] will retry after 585.925068ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1008 18:18:02.933995  558281 main.go:141] libmachine: (ha-094095-m04) Calling .GetSSHHostname
	W1008 18:18:02.935743  558281 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1008 18:18:02.935786  558281 main.go:141] libmachine: Stopping "ha-094095-m04"...
	I1008 18:18:02.935794  558281 main.go:141] libmachine: (ha-094095-m04) Calling .GetState
	I1008 18:18:02.936950  558281 stop.go:66] stop err: Machine "ha-094095-m04" is already stopped.
	I1008 18:18:02.936990  558281 stop.go:69] host is already stopped
	I1008 18:18:02.937006  558281 stop.go:39] StopHost: ha-094095-m02
	I1008 18:18:02.937316  558281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:18:02.937366  558281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:18:02.952377  558281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I1008 18:18:02.952821  558281 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:18:02.953377  558281 main.go:141] libmachine: Using API Version  1
	I1008 18:18:02.953404  558281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:18:02.953714  558281 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:18:02.955736  558281 out.go:177] * Stopping node "ha-094095-m02"  ...
	I1008 18:18:02.956901  558281 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 18:18:02.956928  558281 main.go:141] libmachine: (ha-094095-m02) Calling .DriverName
	I1008 18:18:02.957123  558281 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 18:18:02.957145  558281 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHHostname
	I1008 18:18:02.959776  558281 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:18:02.960365  558281 main.go:141] libmachine: (ha-094095-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:c9:b2", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 19:08:16 +0000 UTC Type:0 Mac:52:54:00:28:c9:b2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:ha-094095-m02 Clientid:01:52:54:00:28:c9:b2}
	I1008 18:18:02.960385  558281 main.go:141] libmachine: (ha-094095-m02) DBG | domain ha-094095-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:28:c9:b2 in network mk-ha-094095
	I1008 18:18:02.960541  558281 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHPort
	I1008 18:18:02.960708  558281 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHKeyPath
	I1008 18:18:02.960836  558281 main.go:141] libmachine: (ha-094095-m02) Calling .GetSSHUsername
	I1008 18:18:02.961007  558281 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095-m02/id_rsa Username:docker}
	I1008 18:18:03.048241  558281 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 18:18:03.100486  558281 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 18:18:03.152953  558281 main.go:141] libmachine: Stopping "ha-094095-m02"...
	I1008 18:18:03.152988  558281 main.go:141] libmachine: (ha-094095-m02) Calling .GetState
	I1008 18:18:03.154420  558281 main.go:141] libmachine: (ha-094095-m02) Calling .Stop
	I1008 18:18:03.157731  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 0/120
	I1008 18:18:04.159132  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 1/120
	I1008 18:18:05.160573  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 2/120
	I1008 18:18:06.161839  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 3/120
	I1008 18:18:07.163195  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 4/120
	I1008 18:18:08.165653  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 5/120
	I1008 18:18:09.167270  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 6/120
	I1008 18:18:10.168696  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 7/120
	I1008 18:18:11.170536  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 8/120
	I1008 18:18:12.172042  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 9/120
	I1008 18:18:13.173856  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 10/120
	I1008 18:18:14.175277  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 11/120
	I1008 18:18:15.176589  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 12/120
	I1008 18:18:16.178307  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 13/120
	I1008 18:18:17.180589  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 14/120
	I1008 18:18:18.182936  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 15/120
	I1008 18:18:19.184659  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 16/120
	I1008 18:18:20.185826  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 17/120
	I1008 18:18:21.187183  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 18/120
	I1008 18:18:22.188301  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 19/120
	I1008 18:18:23.190109  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 20/120
	I1008 18:18:24.191621  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 21/120
	I1008 18:18:25.192967  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 22/120
	I1008 18:18:26.194141  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 23/120
	I1008 18:18:27.195507  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 24/120
	I1008 18:18:28.197166  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 25/120
	I1008 18:18:29.198378  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 26/120
	I1008 18:18:30.199721  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 27/120
	I1008 18:18:31.200850  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 28/120
	I1008 18:18:32.202191  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 29/120
	I1008 18:18:33.203893  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 30/120
	I1008 18:18:34.205301  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 31/120
	I1008 18:18:35.206435  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 32/120
	I1008 18:18:36.207794  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 33/120
	I1008 18:18:37.209014  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 34/120
	I1008 18:18:38.210849  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 35/120
	I1008 18:18:39.212551  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 36/120
	I1008 18:18:40.213792  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 37/120
	I1008 18:18:41.215258  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 38/120
	I1008 18:18:42.216845  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 39/120
	I1008 18:18:43.218618  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 40/120
	I1008 18:18:44.219968  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 41/120
	I1008 18:18:45.221263  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 42/120
	I1008 18:18:46.222851  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 43/120
	I1008 18:18:47.224199  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 44/120
	I1008 18:18:48.226077  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 45/120
	I1008 18:18:49.227335  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 46/120
	I1008 18:18:50.228730  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 47/120
	I1008 18:18:51.230098  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 48/120
	I1008 18:18:52.231337  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 49/120
	I1008 18:18:53.233129  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 50/120
	I1008 18:18:54.234359  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 51/120
	I1008 18:18:55.235652  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 52/120
	I1008 18:18:56.236794  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 53/120
	I1008 18:18:57.238193  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 54/120
	I1008 18:18:58.239810  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 55/120
	I1008 18:18:59.241609  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 56/120
	I1008 18:19:00.242777  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 57/120
	I1008 18:19:01.244011  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 58/120
	I1008 18:19:02.246377  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 59/120
	I1008 18:19:03.248155  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 60/120
	I1008 18:19:04.249467  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 61/120
	I1008 18:19:05.250790  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 62/120
	I1008 18:19:06.252147  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 63/120
	I1008 18:19:07.253453  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 64/120
	I1008 18:19:08.255232  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 65/120
	I1008 18:19:09.256511  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 66/120
	I1008 18:19:10.257833  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 67/120
	I1008 18:19:11.259173  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 68/120
	I1008 18:19:12.260525  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 69/120
	I1008 18:19:13.262225  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 70/120
	I1008 18:19:14.263529  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 71/120
	I1008 18:19:15.264779  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 72/120
	I1008 18:19:16.266078  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 73/120
	I1008 18:19:17.267464  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 74/120
	I1008 18:19:18.269671  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 75/120
	I1008 18:19:19.271031  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 76/120
	I1008 18:19:20.272725  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 77/120
	I1008 18:19:21.273949  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 78/120
	I1008 18:19:22.275350  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 79/120
	I1008 18:19:23.276865  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 80/120
	I1008 18:19:24.278111  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 81/120
	I1008 18:19:25.280021  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 82/120
	I1008 18:19:26.281415  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 83/120
	I1008 18:19:27.283021  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 84/120
	I1008 18:19:28.284709  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 85/120
	I1008 18:19:29.285966  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 86/120
	I1008 18:19:30.287250  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 87/120
	I1008 18:19:31.288509  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 88/120
	I1008 18:19:32.289783  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 89/120
	I1008 18:19:33.291435  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 90/120
	I1008 18:19:34.292682  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 91/120
	I1008 18:19:35.294057  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 92/120
	I1008 18:19:36.295606  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 93/120
	I1008 18:19:37.297014  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 94/120
	I1008 18:19:38.298904  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 95/120
	I1008 18:19:39.300150  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 96/120
	I1008 18:19:40.301416  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 97/120
	I1008 18:19:41.302626  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 98/120
	I1008 18:19:42.304044  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 99/120
	I1008 18:19:43.305876  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 100/120
	I1008 18:19:44.307220  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 101/120
	I1008 18:19:45.308430  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 102/120
	I1008 18:19:46.309769  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 103/120
	I1008 18:19:47.311057  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 104/120
	I1008 18:19:48.312882  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 105/120
	I1008 18:19:49.314359  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 106/120
	I1008 18:19:50.315844  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 107/120
	I1008 18:19:51.317191  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 108/120
	I1008 18:19:52.318620  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 109/120
	I1008 18:19:53.320260  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 110/120
	I1008 18:19:54.321425  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 111/120
	I1008 18:19:55.322978  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 112/120
	I1008 18:19:56.324183  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 113/120
	I1008 18:19:57.325892  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 114/120
	I1008 18:19:58.327500  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 115/120
	I1008 18:19:59.328834  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 116/120
	I1008 18:20:00.330303  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 117/120
	I1008 18:20:01.331648  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 118/120
	I1008 18:20:02.333293  558281 main.go:141] libmachine: (ha-094095-m02) Waiting for machine to stop 119/120
	I1008 18:20:03.334429  558281 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1008 18:20:03.334510  558281 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1008 18:20:03.336591  558281 out.go:201] 
	W1008 18:20:03.337906  558281 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1008 18:20:03.337933  558281 out.go:270] * 
	* 
	W1008 18:20:03.340994  558281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 18:20:03.342416  558281 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-094095 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr: (34.65396655s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr": 
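The exit status 82 above accompanies the GUEST_STOP_TIMEOUT error in the stderr: the kvm2 driver polled the ha-094095-m02 domain 120 times (roughly two minutes) without it leaving the Running state, which matches the host status reported in the post-mortem that follows. When triaging this by hand, the guest can be inspected and, if necessary, forced off at the libvirt level; a rough sketch, assuming virsh is pointed at the same qemu:///system URI the driver uses and that the libvirt domain name matches the node name shown above (illustrative only, not part of the recorded test run):

	virsh -c qemu:///system list --all                 # is ha-094095-m02 still listed as running?
	virsh -c qemu:///system shutdown ha-094095-m02     # request a graceful ACPI shutdown of the guest
	virsh -c qemu:///system destroy ha-094095-m02      # hard power-off as a last resort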
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095
E1008 18:20:51.766467  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-094095 -n ha-094095: exit status 2 (15.616243699s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-094095 logs -n 25: (3.84705797s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m04 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp testdata/cp-test.txt                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt                      |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095 sudo cat                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095.txt                                |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m02 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n                                                                | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | ha-094095-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-094095 ssh -n ha-094095-m03 sudo cat                                         | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC | 08 Oct 24 18:01 UTC |
	|         | /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-094095 node stop m02 -v=7                                                    | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:01 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-094095 node start m02 -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095 -v=7                                                          | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-094095 -v=7                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:04 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-094095 --wait=true -v=7                                                   | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:06 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-094095                                                               | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC |                     |
	| node    | ha-094095 node delete m03 -v=7                                                  | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:17 UTC | 08 Oct 24 18:17 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-094095 stop -v=7                                                             | ha-094095 | jenkins | v1.34.0 | 08 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:06:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:06:30.309137  554606 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:06:30.309278  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309286  554606 out.go:358] Setting ErrFile to fd 2...
	I1008 18:06:30.309292  554606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:06:30.309514  554606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:06:30.310048  554606 out.go:352] Setting JSON to false
	I1008 18:06:30.311177  554606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6542,"bootTime":1728404248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:06:30.311239  554606 start.go:139] virtualization: kvm guest
	I1008 18:06:30.314064  554606 out.go:177] * [ha-094095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:06:30.315343  554606 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:06:30.315380  554606 notify.go:220] Checking for updates...
	I1008 18:06:30.317931  554606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:06:30.319349  554606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:06:30.320487  554606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:06:30.321485  554606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:06:30.322477  554606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:06:30.323977  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:30.324106  554606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:06:30.324624  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.324671  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.339874  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I1008 18:06:30.340381  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.341072  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.341127  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.341483  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.341654  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.375512  554606 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:06:30.376454  554606 start.go:297] selected driver: kvm2
	I1008 18:06:30.376466  554606 start.go:901] validating driver "kvm2" against &{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.376624  554606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:06:30.376959  554606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.377044  554606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:06:30.391484  554606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:06:30.392523  554606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:06:30.392590  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:06:30.392666  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:06:30.392787  554606 start.go:340] cluster config:
	{Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:06:30.393008  554606 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:06:30.394646  554606 out.go:177] * Starting "ha-094095" primary control-plane node in "ha-094095" cluster
	I1008 18:06:30.395834  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:06:30.395871  554606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:06:30.395884  554606 cache.go:56] Caching tarball of preloaded images
	I1008 18:06:30.395977  554606 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:06:30.395992  554606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:06:30.396098  554606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/config.json ...
	I1008 18:06:30.396294  554606 start.go:360] acquireMachinesLock for ha-094095: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:06:30.396336  554606 start.go:364] duration metric: took 25.244µs to acquireMachinesLock for "ha-094095"
	I1008 18:06:30.396355  554606 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:06:30.396364  554606 fix.go:54] fixHost starting: 
	I1008 18:06:30.396631  554606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:06:30.396667  554606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:06:30.410133  554606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I1008 18:06:30.410601  554606 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:06:30.411054  554606 main.go:141] libmachine: Using API Version  1
	I1008 18:06:30.411079  554606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:06:30.411411  554606 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:06:30.411582  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.411739  554606 main.go:141] libmachine: (ha-094095) Calling .GetState
	I1008 18:06:30.413026  554606 fix.go:112] recreateIfNeeded on ha-094095: state=Running err=<nil>
	W1008 18:06:30.413058  554606 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:06:30.414579  554606 out.go:177] * Updating the running kvm2 "ha-094095" VM ...
	I1008 18:06:30.415651  554606 machine.go:93] provisionDockerMachine start ...
	I1008 18:06:30.415671  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:06:30.415848  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.418450  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.418937  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.418961  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.419103  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.419284  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419446  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.419606  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.419778  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.420056  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.420074  554606 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:06:30.527850  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.527883  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528141  554606 buildroot.go:166] provisioning hostname "ha-094095"
	I1008 18:06:30.528169  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.528335  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.530991  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531397  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.531419  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.531520  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.531702  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.531851  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.532037  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.532201  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.532384  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.532397  554606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-094095 && echo "ha-094095" | sudo tee /etc/hostname
	I1008 18:06:30.657746  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-094095
	
	I1008 18:06:30.657776  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.660255  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660584  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.660613  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.660854  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.661042  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661234  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.661339  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.661486  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:30.661678  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:30.661694  554606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-094095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-094095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-094095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:06:30.771861  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:06:30.771897  554606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:06:30.771927  554606 buildroot.go:174] setting up certificates
	I1008 18:06:30.771935  554606 provision.go:84] configureAuth start
	I1008 18:06:30.771945  554606 main.go:141] libmachine: (ha-094095) Calling .GetMachineName
	I1008 18:06:30.772190  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:06:30.774789  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775138  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.775159  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.775238  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.777464  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777796  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.777820  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.777940  554606 provision.go:143] copyHostCerts
	I1008 18:06:30.777975  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778033  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:06:30.778044  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:06:30.778108  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:06:30.778196  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778213  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:06:30.778219  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:06:30.778243  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:06:30.778299  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778314  554606 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:06:30.778342  554606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:06:30.778371  554606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:06:30.778444  554606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.ha-094095 san=[127.0.0.1 192.168.39.99 ha-094095 localhost minikube]
	I1008 18:06:30.957867  554606 provision.go:177] copyRemoteCerts
	I1008 18:06:30.957933  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:06:30.957968  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:30.960618  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.960989  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:30.961015  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:30.961231  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:30.961399  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:30.961567  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:30.961712  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:06:31.044849  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:06:31.044943  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:06:31.071107  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:06:31.071180  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1008 18:06:31.095762  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:06:31.095846  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:06:31.121110  554606 provision.go:87] duration metric: took 349.162437ms to configureAuth
	I1008 18:06:31.121135  554606 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:06:31.121372  554606 config.go:182] Loaded profile config "ha-094095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:06:31.121456  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:06:31.124338  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124715  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:06:31.124743  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:06:31.124960  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:06:31.125168  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125328  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:06:31.125469  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:06:31.125643  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:06:31.125857  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:06:31.125872  554606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:08:01.946716  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:08:01.946753  554606 machine.go:96] duration metric: took 1m31.531085514s to provisionDockerMachine
	I1008 18:08:01.946788  554606 start.go:293] postStartSetup for "ha-094095" (driver="kvm2")
	I1008 18:08:01.946804  554606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:08:01.946874  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:01.947275  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:08:01.947304  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:01.950626  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951103  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:01.951131  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:01.951290  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:01.951497  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:01.951639  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:01.951781  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.033385  554606 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:08:02.037411  554606 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:08:02.037435  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:08:02.037506  554606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:08:02.037603  554606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:08:02.037613  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:08:02.037727  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:08:02.046918  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:02.069405  554606 start.go:296] duration metric: took 122.60226ms for postStartSetup
	I1008 18:08:02.069448  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.069754  554606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1008 18:08:02.069786  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.072518  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072838  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.072865  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.072992  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.073180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.073331  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.073508  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	W1008 18:08:02.152610  554606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1008 18:08:02.152641  554606 fix.go:56] duration metric: took 1m31.756277865s for fixHost
	I1008 18:08:02.152667  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.155151  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155507  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.155533  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.155699  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.155924  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156085  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.156180  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.156317  554606 main.go:141] libmachine: Using SSH client type: native
	I1008 18:08:02.156548  554606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I1008 18:08:02.156560  554606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:08:02.258737  554606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410882.206938077
	
	I1008 18:08:02.258770  554606 fix.go:216] guest clock: 1728410882.206938077
	I1008 18:08:02.258778  554606 fix.go:229] Guest: 2024-10-08 18:08:02.206938077 +0000 UTC Remote: 2024-10-08 18:08:02.152649244 +0000 UTC m=+91.884799909 (delta=54.288833ms)
	I1008 18:08:02.258799  554606 fix.go:200] guest clock delta is within tolerance: 54.288833ms
	I1008 18:08:02.258806  554606 start.go:83] releasing machines lock for "ha-094095", held for 1m31.862459178s
	I1008 18:08:02.258833  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.259096  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:02.261710  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262158  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.262188  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.262371  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263003  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263184  554606 main.go:141] libmachine: (ha-094095) Calling .DriverName
	I1008 18:08:02.263270  554606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:08:02.263327  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.263460  554606 ssh_runner.go:195] Run: cat /version.json
	I1008 18:08:02.263503  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHHostname
	I1008 18:08:02.265924  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.265995  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266403  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266430  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266457  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:02.266477  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:02.266518  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266670  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHPort
	I1008 18:08:02.266732  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266849  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHKeyPath
	I1008 18:08:02.266943  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267005  554606 main.go:141] libmachine: (ha-094095) Calling .GetSSHUsername
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.267082  554606 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/ha-094095/id_rsa Username:docker}
	I1008 18:08:02.365833  554606 ssh_runner.go:195] Run: systemctl --version
	I1008 18:08:02.371662  554606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:08:02.527309  554606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:08:02.535812  554606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:08:02.535865  554606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:08:02.545223  554606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:08:02.545243  554606 start.go:495] detecting cgroup driver to use...
	I1008 18:08:02.545296  554606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:08:02.563394  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:08:02.576622  554606 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:08:02.576674  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:08:02.590489  554606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:08:02.603593  554606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:08:02.770906  554606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:08:02.915368  554606 docker.go:233] disabling docker service ...
	I1008 18:08:02.915466  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:08:02.936728  554606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:08:02.950842  554606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:08:03.095821  554606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:08:03.234839  554606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:08:03.248800  554606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:08:03.267293  554606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:08:03.267428  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.277401  554606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:08:03.277462  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.287120  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.296801  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.306442  554606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:08:03.316601  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.326858  554606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.337481  554606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:08:03.347229  554606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:08:03.356092  554606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:08:03.364690  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:03.501121  554606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 18:08:03.715791  554606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:08:03.715876  554606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:08:03.722347  554606 start.go:563] Will wait 60s for crictl version
	I1008 18:08:03.722394  554606 ssh_runner.go:195] Run: which crictl
	I1008 18:08:03.726190  554606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:08:03.763603  554606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:08:03.763681  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.792418  554606 ssh_runner.go:195] Run: crio --version
	I1008 18:08:03.820998  554606 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:08:03.822155  554606 main.go:141] libmachine: (ha-094095) Calling .GetIP
	I1008 18:08:03.824610  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.824970  554606 main.go:141] libmachine: (ha-094095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:fa:3a", ip: ""} in network mk-ha-094095: {Iface:virbr1 ExpiryTime:2024-10-08 18:57:33 +0000 UTC Type:0 Mac:52:54:00:bf:fa:3a Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-094095 Clientid:01:52:54:00:bf:fa:3a}
	I1008 18:08:03.825009  554606 main.go:141] libmachine: (ha-094095) DBG | domain ha-094095 has defined IP address 192.168.39.99 and MAC address 52:54:00:bf:fa:3a in network mk-ha-094095
	I1008 18:08:03.825195  554606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:08:03.829696  554606 kubeadm.go:883] updating cluster {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:08:03.829876  554606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:08:03.829939  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.872344  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.872365  554606 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:08:03.872416  554606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:08:03.906663  554606 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:08:03.906695  554606 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:08:03.906708  554606 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.31.1 crio true true} ...
	I1008 18:08:03.906862  554606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-094095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:08:03.907028  554606 ssh_runner.go:195] Run: crio config
	I1008 18:08:03.951823  554606 cni.go:84] Creating CNI manager for ""
	I1008 18:08:03.951846  554606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1008 18:08:03.951865  554606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:08:03.951907  554606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-094095 NodeName:ha-094095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:08:03.952075  554606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-094095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:08:03.952094  554606 kube-vip.go:115] generating kube-vip config ...
	I1008 18:08:03.952132  554606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1008 18:08:03.963592  554606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1008 18:08:03.963708  554606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1008 18:08:03.963763  554606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:08:03.973321  554606 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:08:03.973373  554606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1008 18:08:03.982394  554606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1008 18:08:03.998160  554606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:08:04.013870  554606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I1008 18:08:04.029444  554606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1008 18:08:04.046746  554606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1008 18:08:04.050385  554606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:08:04.187480  554606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:08:04.202649  554606 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095 for IP: 192.168.39.99
	I1008 18:08:04.202687  554606 certs.go:194] generating shared ca certs ...
	I1008 18:08:04.202710  554606 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.202895  554606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:08:04.202965  554606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:08:04.202980  554606 certs.go:256] generating profile certs ...
	I1008 18:08:04.203088  554606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/client.key
	I1008 18:08:04.203120  554606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79
	I1008 18:08:04.203141  554606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.65 192.168.39.194 192.168.39.254]
	I1008 18:08:04.324047  554606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 ...
	I1008 18:08:04.324079  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79: {Name:mkea1c36701ecaaf5ae2823ac93dc15356845d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324274  554606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 ...
	I1008 18:08:04.324290  554606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79: {Name:mk673dcd3f7e7c34d453d1db5465641c8c2171a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:08:04.324401  554606 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt
	I1008 18:08:04.324572  554606 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key.0effbd79 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key
	I1008 18:08:04.324713  554606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key
	I1008 18:08:04.324729  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:08:04.324747  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:08:04.324763  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:08:04.324778  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:08:04.324790  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:08:04.324802  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:08:04.324817  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:08:04.324829  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:08:04.324876  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:08:04.324906  554606 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:08:04.324915  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:08:04.324935  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:08:04.324958  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:08:04.324978  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:08:04.325017  554606 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:08:04.325042  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.325053  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.325065  554606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.325639  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:08:04.401305  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:08:04.478449  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:08:04.538212  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:08:04.581701  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 18:08:04.635621  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 18:08:04.690410  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:08:04.754328  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/ha-094095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:08:04.804687  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:08:04.844003  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:08:04.866773  554606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:08:04.888901  554606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:08:04.904800  554606 ssh_runner.go:195] Run: openssl version
	I1008 18:08:04.910960  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:08:04.921572  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925704  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.925756  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:08:04.931381  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:08:04.940576  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:08:04.951135  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955322  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.955378  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:08:04.960810  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:08:04.970166  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:08:04.981978  554606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986369  554606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.986454  554606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:08:04.991920  554606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:08:05.002388  554606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:08:05.006822  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:08:05.012669  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:08:05.017903  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:08:05.023233  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:08:05.028502  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:08:05.033829  554606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:08:05.039332  554606 kubeadm.go:392] StartCluster: {Name:ha-094095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-094095 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.65 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.33 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:08:05.039457  554606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:08:05.039510  554606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:08:05.075684  554606 cri.go:89] found id: "1edf8eb6e4926a4b6f2b1390395c74658d8de0dab758bd25c49ada9d10eb3c62"
	I1008 18:08:05.075705  554606 cri.go:89] found id: "b95002ecb2a0f8c5c902f6d39cb0e4879a684b4a2df25f8f0f02f90fc40edfaf"
	I1008 18:08:05.075710  554606 cri.go:89] found id: "d615cf41c0a26ef67b73c71070f51f4940d14b5b95993e26b459162737dca2c0"
	I1008 18:08:05.075715  554606 cri.go:89] found id: "1a6be7a71e09bd0d7a450960a731ccb779d3f72354128aef9d0612dd74010f3f"
	I1008 18:08:05.075719  554606 cri.go:89] found id: "7d5d2f2ee52fd7aba2e1ed86f7ad04199387d01320ea5f11b2bbd8a3f37d8e19"
	I1008 18:08:05.075724  554606 cri.go:89] found id: "079e7a8fee78ff7d7b1af73386e3bc304da1fe17a7e72f0fc9f0ab564efdb1ee"
	I1008 18:08:05.075728  554606 cri.go:89] found id: "1eb4935d542c2dde5eb8cc5097e277923df97ad90bdbc6b1644c3fd4a1989c02"
	I1008 18:08:05.075731  554606 cri.go:89] found id: "dfdfc8735b8229f0ad3194ebe15f59b898d9db2be87701cf17bbd3c4212f4ec3"
	I1008 18:08:05.075734  554606 cri.go:89] found id: "17a4523dfe3c8888bec2367e73c575639f5ef4084c55e4bc28ad05070043b94a"
	I1008 18:08:05.075744  554606 cri.go:89] found id: "347854044c2941df1b50ca01ae43ba280b7a94bfbb8de921b38e2c79f4317034"
	I1008 18:08:05.075759  554606 cri.go:89] found id: "8f117035b9a9ad0d6413a03e5b852cfc28d1c799c5a27f21e69ea7f55667808d"
	I1008 18:08:05.075764  554606 cri.go:89] found id: "9c418725a44b78f6beceb57e89c5493fb423d343b205b5c90764617539361af7"
	I1008 18:08:05.075767  554606 cri.go:89] found id: "3b8241e00230e86e5e4e31150ce44700332613c424ae065caf7f820c4e152d4b"
	I1008 18:08:05.075774  554606 cri.go:89] found id: "0224d96e8ab1ab0168a2e222ef1fd31a03de32f2d7524c62a8a8230fc453ac20"
	I1008 18:08:05.075782  554606 cri.go:89] found id: "ec97e876ef66b7e682c97edd99f8440cbc588095010bd6fc4f10833a17854bfb"
	I1008 18:08:05.075787  554606 cri.go:89] found id: ""
	I1008 18:08:05.075841  554606 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095: exit status 2 (225.784388ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-094095" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (176.13s)
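The post-mortem above ends at the status probe: "out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-094095 -n ha-094095" exits 2 and reports "Stopped", so the helper skips all kubectl-based diagnostics. The following is a minimal Go sketch of that gating, not the actual helpers_test.go code; the binary path, flags, and profile name are copied from the log above, while the function name apiServerState is a hypothetical illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerState mirrors the probe used in the post-mortem:
//   out/minikube-linux-amd64 status --format={{.APIServer}} -p <profile> -n <profile>
// "status" exits non-zero when a component is stopped, so the captured stdout is
// still useful even when err is an *exec.ExitError.
func apiServerState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format", "{{.APIServer}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, _ := apiServerState("ha-094095")
	if state != "Running" {
		fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", state)
		return
	}
	// kubectl-based post-mortem would run here.
}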

x
+
TestMultiNode/serial/RestartKeepsNodes (317.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-255508
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-255508
E1008 18:34:41.960904  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-255508: exit status 82 (2m1.826022841s)

-- stdout --
	* Stopping node "multinode-255508-m03"  ...
	* Stopping node "multinode-255508-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-255508" : exit status 82
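The stop step is the actual failure here: "stop -p multinode-255508" exits with status 82 after roughly two minutes because the m02/m03 VMs remain in state "Running" (GUEST_STOP_TIMEOUT). A hedged Go sketch of reproducing that step locally and collecting the diagnostics the error box requests follows; the binary path, profile name, and the "logs --file=logs.txt" invocation are taken from the log above, while the three-minute budget is an assumption, not the test's real timeout.

package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Assumed budget; the failing run above exited on its own after ~2m with status 82.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	stop := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-255508")
	if out, err := stop.CombinedOutput(); err != nil {
		log.Printf("stop failed: %v\n%s", err, out)
		// Collect the same diagnostics the GUEST_STOP_TIMEOUT message asks for.
		logs := exec.Command("out/minikube-linux-amd64", "-p", "multinode-255508", "logs", "--file=logs.txt")
		if lerr := logs.Run(); lerr != nil {
			log.Printf("collecting logs failed: %v", lerr)
		}
	}
}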
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-255508 --wait=true -v=8 --alsologtostderr
E1008 18:35:51.765847  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:36:38.897254  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-255508 --wait=true -v=8 --alsologtostderr: (3m13.119461025s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-255508
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-255508 -n multinode-255508
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 logs -n 25: (1.852868957s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile543339778/001/cp-test_multinode-255508-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508:/home/docker/cp-test_multinode-255508-m02_multinode-255508.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508 sudo cat                                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m02_multinode-255508.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03:/home/docker/cp-test_multinode-255508-m02_multinode-255508-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508-m03 sudo cat                                   | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m02_multinode-255508-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp testdata/cp-test.txt                                                | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile543339778/001/cp-test_multinode-255508-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508:/home/docker/cp-test_multinode-255508-m03_multinode-255508.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508 sudo cat                                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m03_multinode-255508.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02:/home/docker/cp-test_multinode-255508-m03_multinode-255508-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508-m02 sudo cat                                   | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m03_multinode-255508-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-255508 node stop m03                                                          | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	| node    | multinode-255508 node start                                                             | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:33 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-255508                                                                | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:33 UTC |                     |
	| stop    | -p multinode-255508                                                                     | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:33 UTC |                     |
	| start   | -p multinode-255508                                                                     | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:35 UTC | 08 Oct 24 18:38 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-255508                                                                | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:38 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:35:38
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:35:38.050074  568041 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:35:38.050173  568041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:35:38.050181  568041 out.go:358] Setting ErrFile to fd 2...
	I1008 18:35:38.050184  568041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:35:38.050401  568041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:35:38.050928  568041 out.go:352] Setting JSON to false
	I1008 18:35:38.051885  568041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8290,"bootTime":1728404248,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:35:38.051982  568041 start.go:139] virtualization: kvm guest
	I1008 18:35:38.055003  568041 out.go:177] * [multinode-255508] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:35:38.056363  568041 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:35:38.056443  568041 notify.go:220] Checking for updates...
	I1008 18:35:38.058769  568041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:35:38.059994  568041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:35:38.061132  568041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:35:38.062376  568041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:35:38.063484  568041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:35:38.064918  568041 config.go:182] Loaded profile config "multinode-255508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:35:38.065012  568041 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:35:38.065454  568041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:35:38.065538  568041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:35:38.080667  568041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1008 18:35:38.081169  568041 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:35:38.081796  568041 main.go:141] libmachine: Using API Version  1
	I1008 18:35:38.081820  568041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:35:38.082199  568041 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:35:38.082396  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:35:38.116329  568041 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:35:38.117402  568041 start.go:297] selected driver: kvm2
	I1008 18:35:38.117414  568041 start.go:901] validating driver "kvm2" against &{Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:35:38.117561  568041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:35:38.117891  568041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:35:38.117966  568041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:35:38.132625  568041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:35:38.133354  568041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:35:38.133388  568041 cni.go:84] Creating CNI manager for ""
	I1008 18:35:38.133448  568041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1008 18:35:38.133523  568041 start.go:340] cluster config:
	{Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:35:38.133693  568041 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:35:38.135213  568041 out.go:177] * Starting "multinode-255508" primary control-plane node in "multinode-255508" cluster
	I1008 18:35:38.136253  568041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:35:38.136291  568041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:35:38.136304  568041 cache.go:56] Caching tarball of preloaded images
	I1008 18:35:38.136386  568041 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:35:38.136401  568041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:35:38.136546  568041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/config.json ...
	I1008 18:35:38.136764  568041 start.go:360] acquireMachinesLock for multinode-255508: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:35:38.136810  568041 start.go:364] duration metric: took 25.242µs to acquireMachinesLock for "multinode-255508"
	I1008 18:35:38.136830  568041 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:35:38.136839  568041 fix.go:54] fixHost starting: 
	I1008 18:35:38.137120  568041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:35:38.137158  568041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:35:38.150937  568041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1008 18:35:38.151507  568041 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:35:38.152041  568041 main.go:141] libmachine: Using API Version  1
	I1008 18:35:38.152071  568041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:35:38.152414  568041 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:35:38.152626  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:35:38.152778  568041 main.go:141] libmachine: (multinode-255508) Calling .GetState
	I1008 18:35:38.154149  568041 fix.go:112] recreateIfNeeded on multinode-255508: state=Running err=<nil>
	W1008 18:35:38.154169  568041 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:35:38.156449  568041 out.go:177] * Updating the running kvm2 "multinode-255508" VM ...
	I1008 18:35:38.157704  568041 machine.go:93] provisionDockerMachine start ...
	I1008 18:35:38.157724  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:35:38.157912  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.160350  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.160775  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.160817  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.160897  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.161059  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.161213  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.161306  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.161444  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.161650  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.161661  568041 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:35:38.264229  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-255508
	
	I1008 18:35:38.264264  568041 main.go:141] libmachine: (multinode-255508) Calling .GetMachineName
	I1008 18:35:38.264486  568041 buildroot.go:166] provisioning hostname "multinode-255508"
	I1008 18:35:38.264530  568041 main.go:141] libmachine: (multinode-255508) Calling .GetMachineName
	I1008 18:35:38.264730  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.267490  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.267888  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.267920  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.268056  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.268249  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.268398  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.268536  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.268669  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.268864  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.268881  568041 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-255508 && echo "multinode-255508" | sudo tee /etc/hostname
	I1008 18:35:38.384761  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-255508
	
	I1008 18:35:38.384798  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.387458  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.387798  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.387834  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.387958  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.388133  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.388295  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.388467  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.388614  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.388787  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.388808  568041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-255508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-255508/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-255508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:35:38.487280  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:35:38.487316  568041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:35:38.487359  568041 buildroot.go:174] setting up certificates
	I1008 18:35:38.487376  568041 provision.go:84] configureAuth start
	I1008 18:35:38.487395  568041 main.go:141] libmachine: (multinode-255508) Calling .GetMachineName
	I1008 18:35:38.487679  568041 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:35:38.489997  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.490424  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.490449  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.490518  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.492594  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.492886  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.492921  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.493024  568041 provision.go:143] copyHostCerts
	I1008 18:35:38.493078  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:35:38.493130  568041 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:35:38.493142  568041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:35:38.493218  568041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:35:38.493342  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:35:38.493371  568041 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:35:38.493380  568041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:35:38.493424  568041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:35:38.493542  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:35:38.493572  568041 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:35:38.493578  568041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:35:38.493611  568041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:35:38.493684  568041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.multinode-255508 san=[127.0.0.1 192.168.39.43 localhost minikube multinode-255508]
	I1008 18:35:38.654473  568041 provision.go:177] copyRemoteCerts
	I1008 18:35:38.654542  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:35:38.654569  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.656972  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.657274  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.657305  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.657431  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.657623  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.657795  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.657946  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:35:38.736797  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:35:38.736859  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1008 18:35:38.760171  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:35:38.760234  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:35:38.783571  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:35:38.783638  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:35:38.805869  568041 provision.go:87] duration metric: took 318.478434ms to configureAuth
	I1008 18:35:38.805896  568041 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:35:38.806113  568041 config.go:182] Loaded profile config "multinode-255508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:35:38.806204  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.808638  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.808994  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.809027  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.809173  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.809354  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.809528  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.809671  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.809806  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.810010  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.810032  568041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:37:09.540080  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:37:09.540112  568041 machine.go:96] duration metric: took 1m31.382394022s to provisionDockerMachine
	I1008 18:37:09.540127  568041 start.go:293] postStartSetup for "multinode-255508" (driver="kvm2")
	I1008 18:37:09.540146  568041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:37:09.540201  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.540587  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:37:09.540655  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.544021  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.544464  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.544497  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.544687  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.544867  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.545021  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.545173  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:37:09.626568  568041 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:37:09.630842  568041 command_runner.go:130] > NAME=Buildroot
	I1008 18:37:09.630864  568041 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1008 18:37:09.630868  568041 command_runner.go:130] > ID=buildroot
	I1008 18:37:09.630875  568041 command_runner.go:130] > VERSION_ID=2023.02.9
	I1008 18:37:09.630881  568041 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1008 18:37:09.630921  568041 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:37:09.630938  568041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:37:09.631013  568041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:37:09.631109  568041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:37:09.631121  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:37:09.631242  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:37:09.640932  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:37:09.663866  568041 start.go:296] duration metric: took 123.725256ms for postStartSetup
	I1008 18:37:09.663910  568041 fix.go:56] duration metric: took 1m31.527071073s for fixHost
	I1008 18:37:09.663932  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.666562  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.666937  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.666969  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.667189  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.667371  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.667561  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.667702  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.667866  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:37:09.668057  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:37:09.668070  568041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:37:09.766935  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728412629.734144499
	
	I1008 18:37:09.766962  568041 fix.go:216] guest clock: 1728412629.734144499
	I1008 18:37:09.766970  568041 fix.go:229] Guest: 2024-10-08 18:37:09.734144499 +0000 UTC Remote: 2024-10-08 18:37:09.663914553 +0000 UTC m=+91.653762432 (delta=70.229946ms)
	I1008 18:37:09.767020  568041 fix.go:200] guest clock delta is within tolerance: 70.229946ms
	I1008 18:37:09.767028  568041 start.go:83] releasing machines lock for "multinode-255508", held for 1m31.630206079s
	I1008 18:37:09.767068  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.767392  568041 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:37:09.769902  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.770343  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.770375  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.770490  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.771067  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.771258  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.771370  568041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:37:09.771417  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.771464  568041 ssh_runner.go:195] Run: cat /version.json
	I1008 18:37:09.771487  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.774064  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774214  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774517  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.774543  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774622  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.774784  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.774806  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.774833  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774956  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.775040  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.775055  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.775155  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:37:09.775202  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.775317  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:37:09.846442  568041 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1008 18:37:09.846742  568041 ssh_runner.go:195] Run: systemctl --version
	I1008 18:37:09.871718  568041 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 18:37:09.872364  568041 command_runner.go:130] > systemd 252 (252)
	I1008 18:37:09.872398  568041 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1008 18:37:09.872459  568041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:37:10.030760  568041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 18:37:10.036955  568041 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 18:37:10.037029  568041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:37:10.037089  568041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:37:10.047110  568041 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:37:10.047137  568041 start.go:495] detecting cgroup driver to use...
	I1008 18:37:10.047209  568041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:37:10.063590  568041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:37:10.077002  568041 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:37:10.077053  568041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:37:10.090443  568041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:37:10.103616  568041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:37:10.248001  568041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:37:10.400774  568041 docker.go:233] disabling docker service ...
	I1008 18:37:10.400860  568041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:37:10.424900  568041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:37:10.438878  568041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:37:10.585134  568041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:37:10.727825  568041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:37:10.742222  568041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:37:10.760525  568041 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1008 18:37:10.760780  568041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:37:10.760869  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.771750  568041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:37:10.771831  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.782649  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.793535  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.805254  568041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:37:10.816830  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.827842  568041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.838510  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.849975  568041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:37:10.859680  568041 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 18:37:10.859745  568041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:37:10.869585  568041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:37:11.002387  568041 ssh_runner.go:195] Run: sudo systemctl restart crio
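	[editor's note] The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. Below is a hedged Go sketch of the same style of in-place key replacement; it edits a local copy of the drop-in and the helper name is invented for illustration, it is not minikube's code.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfValue replaces a `key = ...` line in a CRI-O drop-in config with a
	// new quoted value, mirroring what the sed commands in the log do over SSH.
	func setConfValue(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
	}

	func main() {
		conf, err := os.ReadFile("02-crio.conf") // assumes a local copy of the drop-in
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile("02-crio.conf", conf, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}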
	I1008 18:37:11.195529  568041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:37:11.195613  568041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:37:11.200275  568041 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 18:37:11.200305  568041 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 18:37:11.200314  568041 command_runner.go:130] > Device: 0,22	Inode: 1269        Links: 1
	I1008 18:37:11.200325  568041 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 18:37:11.200333  568041 command_runner.go:130] > Access: 2024-10-08 18:37:11.138923277 +0000
	I1008 18:37:11.200345  568041 command_runner.go:130] > Modify: 2024-10-08 18:37:11.063920996 +0000
	I1008 18:37:11.200353  568041 command_runner.go:130] > Change: 2024-10-08 18:37:11.063920996 +0000
	I1008 18:37:11.200362  568041 command_runner.go:130] >  Birth: -
	I1008 18:37:11.200425  568041 start.go:563] Will wait 60s for crictl version
	I1008 18:37:11.200487  568041 ssh_runner.go:195] Run: which crictl
	I1008 18:37:11.204040  568041 command_runner.go:130] > /usr/bin/crictl
	I1008 18:37:11.204199  568041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:37:11.244185  568041 command_runner.go:130] > Version:  0.1.0
	I1008 18:37:11.244215  568041 command_runner.go:130] > RuntimeName:  cri-o
	I1008 18:37:11.244236  568041 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1008 18:37:11.244245  568041 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 18:37:11.244269  568041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
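	[editor's note] The crictl version call above returns a handful of `Key:  value` lines that start.go:579 then echoes back. Below is a small Go sketch of parsing that shape of output into a map; it is an illustration only, not the parser minikube uses.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseKeyValues splits `Key:  value` lines, such as the output of
	// `crictl version`, into a map. Lines without a colon are ignored.
	func parseKeyValues(out string) map[string]string {
		fields := make(map[string]string)
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			parts := strings.SplitN(sc.Text(), ":", 2)
			if len(parts) != 2 {
				continue
			}
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
		return fields
	}

	func main() {
		out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
		v := parseKeyValues(out)
		fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
	}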
	I1008 18:37:11.244369  568041 ssh_runner.go:195] Run: crio --version
	I1008 18:37:11.271722  568041 command_runner.go:130] > crio version 1.29.1
	I1008 18:37:11.271750  568041 command_runner.go:130] > Version:        1.29.1
	I1008 18:37:11.271756  568041 command_runner.go:130] > GitCommit:      unknown
	I1008 18:37:11.271760  568041 command_runner.go:130] > GitCommitDate:  unknown
	I1008 18:37:11.271763  568041 command_runner.go:130] > GitTreeState:   clean
	I1008 18:37:11.271769  568041 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1008 18:37:11.271773  568041 command_runner.go:130] > GoVersion:      go1.21.6
	I1008 18:37:11.271776  568041 command_runner.go:130] > Compiler:       gc
	I1008 18:37:11.271781  568041 command_runner.go:130] > Platform:       linux/amd64
	I1008 18:37:11.271784  568041 command_runner.go:130] > Linkmode:       dynamic
	I1008 18:37:11.271788  568041 command_runner.go:130] > BuildTags:      
	I1008 18:37:11.271792  568041 command_runner.go:130] >   containers_image_ostree_stub
	I1008 18:37:11.271796  568041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1008 18:37:11.271799  568041 command_runner.go:130] >   btrfs_noversion
	I1008 18:37:11.271804  568041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1008 18:37:11.271808  568041 command_runner.go:130] >   libdm_no_deferred_remove
	I1008 18:37:11.271811  568041 command_runner.go:130] >   seccomp
	I1008 18:37:11.271815  568041 command_runner.go:130] > LDFlags:          unknown
	I1008 18:37:11.271819  568041 command_runner.go:130] > SeccompEnabled:   true
	I1008 18:37:11.271822  568041 command_runner.go:130] > AppArmorEnabled:  false
	I1008 18:37:11.272800  568041 ssh_runner.go:195] Run: crio --version
	I1008 18:37:11.299149  568041 command_runner.go:130] > crio version 1.29.1
	I1008 18:37:11.299177  568041 command_runner.go:130] > Version:        1.29.1
	I1008 18:37:11.299185  568041 command_runner.go:130] > GitCommit:      unknown
	I1008 18:37:11.299192  568041 command_runner.go:130] > GitCommitDate:  unknown
	I1008 18:37:11.299199  568041 command_runner.go:130] > GitTreeState:   clean
	I1008 18:37:11.299208  568041 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1008 18:37:11.299215  568041 command_runner.go:130] > GoVersion:      go1.21.6
	I1008 18:37:11.299221  568041 command_runner.go:130] > Compiler:       gc
	I1008 18:37:11.299229  568041 command_runner.go:130] > Platform:       linux/amd64
	I1008 18:37:11.299238  568041 command_runner.go:130] > Linkmode:       dynamic
	I1008 18:37:11.299248  568041 command_runner.go:130] > BuildTags:      
	I1008 18:37:11.299266  568041 command_runner.go:130] >   containers_image_ostree_stub
	I1008 18:37:11.299273  568041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1008 18:37:11.299280  568041 command_runner.go:130] >   btrfs_noversion
	I1008 18:37:11.299288  568041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1008 18:37:11.299298  568041 command_runner.go:130] >   libdm_no_deferred_remove
	I1008 18:37:11.299304  568041 command_runner.go:130] >   seccomp
	I1008 18:37:11.299312  568041 command_runner.go:130] > LDFlags:          unknown
	I1008 18:37:11.299318  568041 command_runner.go:130] > SeccompEnabled:   true
	I1008 18:37:11.299326  568041 command_runner.go:130] > AppArmorEnabled:  false
	I1008 18:37:11.304853  568041 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:37:11.306134  568041 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:37:11.309068  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:11.309461  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:11.309491  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:11.309665  568041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:37:11.313580  568041 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1008 18:37:11.313753  568041 kubeadm.go:883] updating cluster {Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:37:11.313894  568041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:37:11.313935  568041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:37:11.352028  568041 command_runner.go:130] > {
	I1008 18:37:11.352049  568041 command_runner.go:130] >   "images": [
	I1008 18:37:11.352060  568041 command_runner.go:130] >     {
	I1008 18:37:11.352071  568041 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1008 18:37:11.352076  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352081  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1008 18:37:11.352085  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352088  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352096  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1008 18:37:11.352109  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1008 18:37:11.352113  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352117  568041 command_runner.go:130] >       "size": "87190579",
	I1008 18:37:11.352120  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352124  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352132  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352136  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352139  568041 command_runner.go:130] >     },
	I1008 18:37:11.352143  568041 command_runner.go:130] >     {
	I1008 18:37:11.352151  568041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1008 18:37:11.352155  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352160  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1008 18:37:11.352165  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352169  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352176  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1008 18:37:11.352182  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1008 18:37:11.352187  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352190  568041 command_runner.go:130] >       "size": "1363676",
	I1008 18:37:11.352194  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352203  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352209  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352212  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352215  568041 command_runner.go:130] >     },
	I1008 18:37:11.352219  568041 command_runner.go:130] >     {
	I1008 18:37:11.352224  568041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 18:37:11.352229  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352237  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 18:37:11.352243  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352247  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352254  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 18:37:11.352263  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 18:37:11.352266  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352273  568041 command_runner.go:130] >       "size": "31470524",
	I1008 18:37:11.352276  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352280  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352286  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352290  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352295  568041 command_runner.go:130] >     },
	I1008 18:37:11.352303  568041 command_runner.go:130] >     {
	I1008 18:37:11.352311  568041 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1008 18:37:11.352318  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352325  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1008 18:37:11.352328  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352332  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352341  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1008 18:37:11.352357  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1008 18:37:11.352363  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352367  568041 command_runner.go:130] >       "size": "63273227",
	I1008 18:37:11.352370  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352375  568041 command_runner.go:130] >       "username": "nonroot",
	I1008 18:37:11.352379  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352385  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352388  568041 command_runner.go:130] >     },
	I1008 18:37:11.352393  568041 command_runner.go:130] >     {
	I1008 18:37:11.352399  568041 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1008 18:37:11.352405  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352410  568041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1008 18:37:11.352415  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352419  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352432  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1008 18:37:11.352441  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1008 18:37:11.352446  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352450  568041 command_runner.go:130] >       "size": "149009664",
	I1008 18:37:11.352456  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352460  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352463  568041 command_runner.go:130] >       },
	I1008 18:37:11.352466  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352471  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352475  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352481  568041 command_runner.go:130] >     },
	I1008 18:37:11.352484  568041 command_runner.go:130] >     {
	I1008 18:37:11.352490  568041 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1008 18:37:11.352496  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352501  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1008 18:37:11.352515  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352521  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352528  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1008 18:37:11.352537  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1008 18:37:11.352544  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352547  568041 command_runner.go:130] >       "size": "95237600",
	I1008 18:37:11.352551  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352555  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352560  568041 command_runner.go:130] >       },
	I1008 18:37:11.352564  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352570  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352574  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352580  568041 command_runner.go:130] >     },
	I1008 18:37:11.352583  568041 command_runner.go:130] >     {
	I1008 18:37:11.352589  568041 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1008 18:37:11.352595  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352600  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1008 18:37:11.352605  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352614  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352624  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1008 18:37:11.352633  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1008 18:37:11.352637  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352640  568041 command_runner.go:130] >       "size": "89437508",
	I1008 18:37:11.352645  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352648  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352652  568041 command_runner.go:130] >       },
	I1008 18:37:11.352656  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352662  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352666  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352671  568041 command_runner.go:130] >     },
	I1008 18:37:11.352674  568041 command_runner.go:130] >     {
	I1008 18:37:11.352680  568041 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1008 18:37:11.352685  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352689  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1008 18:37:11.352694  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352697  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352719  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1008 18:37:11.352728  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1008 18:37:11.352731  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352735  568041 command_runner.go:130] >       "size": "92733849",
	I1008 18:37:11.352739  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352742  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352746  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352750  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352752  568041 command_runner.go:130] >     },
	I1008 18:37:11.352755  568041 command_runner.go:130] >     {
	I1008 18:37:11.352761  568041 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1008 18:37:11.352764  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352769  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1008 18:37:11.352772  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352775  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352787  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1008 18:37:11.352793  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1008 18:37:11.352797  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352800  568041 command_runner.go:130] >       "size": "68420934",
	I1008 18:37:11.352803  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352807  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352810  568041 command_runner.go:130] >       },
	I1008 18:37:11.352813  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352816  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352819  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352822  568041 command_runner.go:130] >     },
	I1008 18:37:11.352830  568041 command_runner.go:130] >     {
	I1008 18:37:11.352836  568041 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1008 18:37:11.352839  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352843  568041 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1008 18:37:11.352846  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352850  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352856  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1008 18:37:11.352862  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1008 18:37:11.352864  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352868  568041 command_runner.go:130] >       "size": "742080",
	I1008 18:37:11.352871  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352875  568041 command_runner.go:130] >         "value": "65535"
	I1008 18:37:11.352878  568041 command_runner.go:130] >       },
	I1008 18:37:11.352882  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352886  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352889  568041 command_runner.go:130] >       "pinned": true
	I1008 18:37:11.352893  568041 command_runner.go:130] >     }
	I1008 18:37:11.352899  568041 command_runner.go:130] >   ]
	I1008 18:37:11.352902  568041 command_runner.go:130] > }
	I1008 18:37:11.353846  568041 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:37:11.353867  568041 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:37:11.353910  568041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:37:11.383502  568041 command_runner.go:130] > {
	I1008 18:37:11.383528  568041 command_runner.go:130] >   "images": [
	I1008 18:37:11.383534  568041 command_runner.go:130] >     {
	I1008 18:37:11.383560  568041 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1008 18:37:11.383568  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383574  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1008 18:37:11.383582  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383588  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383605  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1008 18:37:11.383619  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1008 18:37:11.383625  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383635  568041 command_runner.go:130] >       "size": "87190579",
	I1008 18:37:11.383644  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383653  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.383671  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383680  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383685  568041 command_runner.go:130] >     },
	I1008 18:37:11.383691  568041 command_runner.go:130] >     {
	I1008 18:37:11.383704  568041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1008 18:37:11.383711  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383722  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1008 18:37:11.383731  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383740  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383752  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1008 18:37:11.383765  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1008 18:37:11.383771  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383775  568041 command_runner.go:130] >       "size": "1363676",
	I1008 18:37:11.383780  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383789  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.383795  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383799  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383802  568041 command_runner.go:130] >     },
	I1008 18:37:11.383805  568041 command_runner.go:130] >     {
	I1008 18:37:11.383811  568041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 18:37:11.383817  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383822  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 18:37:11.383832  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383839  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383846  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 18:37:11.383855  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 18:37:11.383858  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383862  568041 command_runner.go:130] >       "size": "31470524",
	I1008 18:37:11.383866  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383869  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.383874  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383878  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383884  568041 command_runner.go:130] >     },
	I1008 18:37:11.383887  568041 command_runner.go:130] >     {
	I1008 18:37:11.383893  568041 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1008 18:37:11.383897  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383902  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1008 18:37:11.383906  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383910  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383917  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1008 18:37:11.383933  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1008 18:37:11.383939  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383942  568041 command_runner.go:130] >       "size": "63273227",
	I1008 18:37:11.383946  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383950  568041 command_runner.go:130] >       "username": "nonroot",
	I1008 18:37:11.383956  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383962  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383965  568041 command_runner.go:130] >     },
	I1008 18:37:11.383968  568041 command_runner.go:130] >     {
	I1008 18:37:11.383974  568041 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1008 18:37:11.383978  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383982  568041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1008 18:37:11.383986  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383990  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383999  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1008 18:37:11.384178  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1008 18:37:11.384302  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384316  568041 command_runner.go:130] >       "size": "149009664",
	I1008 18:37:11.384323  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.384330  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.384336  568041 command_runner.go:130] >       },
	I1008 18:37:11.384342  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384349  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384361  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384366  568041 command_runner.go:130] >     },
	I1008 18:37:11.384371  568041 command_runner.go:130] >     {
	I1008 18:37:11.384381  568041 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1008 18:37:11.384390  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384404  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1008 18:37:11.384411  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384419  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.384441  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1008 18:37:11.384457  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1008 18:37:11.384490  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384566  568041 command_runner.go:130] >       "size": "95237600",
	I1008 18:37:11.384580  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.384593  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.384601  568041 command_runner.go:130] >       },
	I1008 18:37:11.384608  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384623  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384631  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384637  568041 command_runner.go:130] >     },
	I1008 18:37:11.384642  568041 command_runner.go:130] >     {
	I1008 18:37:11.384661  568041 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1008 18:37:11.384671  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384683  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1008 18:37:11.384689  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384695  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.384711  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1008 18:37:11.384723  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1008 18:37:11.384730  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384742  568041 command_runner.go:130] >       "size": "89437508",
	I1008 18:37:11.384748  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.384754  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.384765  568041 command_runner.go:130] >       },
	I1008 18:37:11.384771  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384777  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384783  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384792  568041 command_runner.go:130] >     },
	I1008 18:37:11.384797  568041 command_runner.go:130] >     {
	I1008 18:37:11.384806  568041 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1008 18:37:11.384812  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384824  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1008 18:37:11.384834  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384841  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.384875  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1008 18:37:11.384892  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1008 18:37:11.384903  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384910  568041 command_runner.go:130] >       "size": "92733849",
	I1008 18:37:11.384916  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.384922  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384933  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384939  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384944  568041 command_runner.go:130] >     },
	I1008 18:37:11.384950  568041 command_runner.go:130] >     {
	I1008 18:37:11.384959  568041 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1008 18:37:11.384970  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384978  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1008 18:37:11.384984  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384990  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.385006  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1008 18:37:11.385017  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1008 18:37:11.385022  568041 command_runner.go:130] >       ],
	I1008 18:37:11.385029  568041 command_runner.go:130] >       "size": "68420934",
	I1008 18:37:11.385039  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.385046  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.385051  568041 command_runner.go:130] >       },
	I1008 18:37:11.385057  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.385065  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.385071  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.385081  568041 command_runner.go:130] >     },
	I1008 18:37:11.385087  568041 command_runner.go:130] >     {
	I1008 18:37:11.385097  568041 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1008 18:37:11.385103  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.385111  568041 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1008 18:37:11.385121  568041 command_runner.go:130] >       ],
	I1008 18:37:11.385127  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.385138  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1008 18:37:11.385158  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1008 18:37:11.385163  568041 command_runner.go:130] >       ],
	I1008 18:37:11.385176  568041 command_runner.go:130] >       "size": "742080",
	I1008 18:37:11.385182  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.385189  568041 command_runner.go:130] >         "value": "65535"
	I1008 18:37:11.385194  568041 command_runner.go:130] >       },
	I1008 18:37:11.385204  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.385210  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.385215  568041 command_runner.go:130] >       "pinned": true
	I1008 18:37:11.385220  568041 command_runner.go:130] >     }
	I1008 18:37:11.385225  568041 command_runner.go:130] >   ]
	I1008 18:37:11.385229  568041 command_runner.go:130] > }
	I1008 18:37:11.385545  568041 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:37:11.385564  568041 cache_images.go:84] Images are preloaded, skipping loading
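	[editor's note] Both `sudo crictl images --output json` runs above return the same document, and crio.go:514 / cache_images.go:84 conclude from it that the preloaded images are already present. Below is a hedged Go sketch of decoding that JSON and checking for a required tag; the struct fields mirror the JSON keys visible in the log, and the helper is illustrative only.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// image mirrors an entry under "images" in `crictl images --output json`
	// as shown in the log; only the fields used here are declared. Note that
	// "size" is a quoted string in this output, not a number.
	type image struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	}

	type imageList struct {
		Images []image `json:"images"`
	}

	// hasTag reports whether any listed image carries the wanted repo tag.
	func hasTag(list imageList, tag string) bool {
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true
				}
			}
		}
		return false
	}

	func main() {
		raw := []byte(`{"images":[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"],"size":"742080","pinned":true}]}`)
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		fmt.Println(hasTag(list, "registry.k8s.io/pause:3.10")) // true
	}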
	I1008 18:37:11.385597  568041 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.31.1 crio true true} ...
	I1008 18:37:11.386241  568041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-255508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
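	[editor's note] kubeadm.go:946 above prints the kubelet systemd drop-in that minikube writes, with the node name, IP, and Kubernetes version substituted into the ExecStart line. Below is a minimal Go sketch of rendering such a unit from a template; the template text is taken from the log, while the nodeConfig struct is an assumption for illustration.

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig carries the values substituted into the kubelet drop-in,
	// matching the --hostname-override and --node-ip flags in the log.
	type nodeConfig struct {
		Name    string
		IP      string
		Version string
	}

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		cfg := nodeConfig{Name: "multinode-255508", IP: "192.168.39.43", Version: "v1.31.1"}
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}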
	I1008 18:37:11.386343  568041 ssh_runner.go:195] Run: crio config
	I1008 18:37:11.427654  568041 command_runner.go:130] ! time="2024-10-08 18:37:11.394911843Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1008 18:37:11.433425  568041 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 18:37:11.445844  568041 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 18:37:11.445869  568041 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 18:37:11.445875  568041 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 18:37:11.445878  568041 command_runner.go:130] > #
	I1008 18:37:11.445885  568041 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 18:37:11.445890  568041 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 18:37:11.445896  568041 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 18:37:11.445918  568041 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 18:37:11.445922  568041 command_runner.go:130] > # reload'.
	I1008 18:37:11.445928  568041 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 18:37:11.445937  568041 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 18:37:11.445942  568041 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 18:37:11.445948  568041 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 18:37:11.445954  568041 command_runner.go:130] > [crio]
	I1008 18:37:11.445961  568041 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 18:37:11.445966  568041 command_runner.go:130] > # containers images, in this directory.
	I1008 18:37:11.445970  568041 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1008 18:37:11.445978  568041 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 18:37:11.445988  568041 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1008 18:37:11.445995  568041 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1008 18:37:11.445998  568041 command_runner.go:130] > # imagestore = ""
	I1008 18:37:11.446004  568041 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 18:37:11.446010  568041 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 18:37:11.446014  568041 command_runner.go:130] > storage_driver = "overlay"
	I1008 18:37:11.446020  568041 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 18:37:11.446026  568041 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 18:37:11.446029  568041 command_runner.go:130] > storage_option = [
	I1008 18:37:11.446034  568041 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1008 18:37:11.446039  568041 command_runner.go:130] > ]
	I1008 18:37:11.446045  568041 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 18:37:11.446051  568041 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 18:37:11.446056  568041 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 18:37:11.446061  568041 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 18:37:11.446067  568041 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 18:37:11.446073  568041 command_runner.go:130] > # always happen on a node reboot
	I1008 18:37:11.446077  568041 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 18:37:11.446086  568041 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 18:37:11.446094  568041 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 18:37:11.446099  568041 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 18:37:11.446105  568041 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1008 18:37:11.446112  568041 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 18:37:11.446119  568041 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 18:37:11.446123  568041 command_runner.go:130] > # internal_wipe = true
	I1008 18:37:11.446130  568041 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 18:37:11.446135  568041 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 18:37:11.446145  568041 command_runner.go:130] > # internal_repair = false
	I1008 18:37:11.446151  568041 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 18:37:11.446157  568041 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 18:37:11.446162  568041 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 18:37:11.446167  568041 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 18:37:11.446174  568041 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 18:37:11.446178  568041 command_runner.go:130] > [crio.api]
	I1008 18:37:11.446184  568041 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 18:37:11.446191  568041 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 18:37:11.446196  568041 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 18:37:11.446200  568041 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 18:37:11.446206  568041 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 18:37:11.446211  568041 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 18:37:11.446215  568041 command_runner.go:130] > # stream_port = "0"
	I1008 18:37:11.446220  568041 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 18:37:11.446226  568041 command_runner.go:130] > # stream_enable_tls = false
	I1008 18:37:11.446231  568041 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 18:37:11.446235  568041 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 18:37:11.446240  568041 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 18:37:11.446248  568041 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1008 18:37:11.446252  568041 command_runner.go:130] > # minutes.
	I1008 18:37:11.446255  568041 command_runner.go:130] > # stream_tls_cert = ""
	I1008 18:37:11.446262  568041 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 18:37:11.446269  568041 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1008 18:37:11.446273  568041 command_runner.go:130] > # stream_tls_key = ""
	I1008 18:37:11.446281  568041 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 18:37:11.446287  568041 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 18:37:11.446306  568041 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1008 18:37:11.446312  568041 command_runner.go:130] > # stream_tls_ca = ""
	I1008 18:37:11.446332  568041 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 18:37:11.446339  568041 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1008 18:37:11.446352  568041 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 18:37:11.446359  568041 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1008 18:37:11.446369  568041 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 18:37:11.446377  568041 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 18:37:11.446382  568041 command_runner.go:130] > [crio.runtime]
	I1008 18:37:11.446389  568041 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 18:37:11.446394  568041 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 18:37:11.446400  568041 command_runner.go:130] > # "nofile=1024:2048"
	I1008 18:37:11.446406  568041 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 18:37:11.446412  568041 command_runner.go:130] > # default_ulimits = [
	I1008 18:37:11.446415  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446421  568041 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 18:37:11.446425  568041 command_runner.go:130] > # no_pivot = false
	I1008 18:37:11.446433  568041 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 18:37:11.446441  568041 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 18:37:11.446445  568041 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 18:37:11.446451  568041 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 18:37:11.446456  568041 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 18:37:11.446464  568041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 18:37:11.446469  568041 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1008 18:37:11.446475  568041 command_runner.go:130] > # Cgroup setting for conmon
	I1008 18:37:11.446482  568041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 18:37:11.446488  568041 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 18:37:11.446494  568041 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 18:37:11.446500  568041 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 18:37:11.446511  568041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 18:37:11.446516  568041 command_runner.go:130] > conmon_env = [
	I1008 18:37:11.446522  568041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1008 18:37:11.446527  568041 command_runner.go:130] > ]
	I1008 18:37:11.446532  568041 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 18:37:11.446537  568041 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 18:37:11.446544  568041 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 18:37:11.446548  568041 command_runner.go:130] > # default_env = [
	I1008 18:37:11.446553  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446558  568041 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 18:37:11.446570  568041 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1008 18:37:11.446576  568041 command_runner.go:130] > # selinux = false
	I1008 18:37:11.446582  568041 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 18:37:11.446588  568041 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1008 18:37:11.446595  568041 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1008 18:37:11.446599  568041 command_runner.go:130] > # seccomp_profile = ""
	I1008 18:37:11.446604  568041 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1008 18:37:11.446610  568041 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1008 18:37:11.446615  568041 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1008 18:37:11.446622  568041 command_runner.go:130] > # which might increase security.
	I1008 18:37:11.446626  568041 command_runner.go:130] > # This option is currently deprecated,
	I1008 18:37:11.446631  568041 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1008 18:37:11.446638  568041 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1008 18:37:11.446643  568041 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 18:37:11.446650  568041 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 18:37:11.446658  568041 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 18:37:11.446666  568041 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 18:37:11.446670  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.446675  568041 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 18:37:11.446682  568041 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 18:37:11.446686  568041 command_runner.go:130] > # the cgroup blockio controller.
	I1008 18:37:11.446692  568041 command_runner.go:130] > # blockio_config_file = ""
	I1008 18:37:11.446698  568041 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 18:37:11.446703  568041 command_runner.go:130] > # blockio parameters.
	I1008 18:37:11.446707  568041 command_runner.go:130] > # blockio_reload = false
	I1008 18:37:11.446713  568041 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 18:37:11.446719  568041 command_runner.go:130] > # irqbalance daemon.
	I1008 18:37:11.446724  568041 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 18:37:11.446729  568041 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 18:37:11.446736  568041 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 18:37:11.446742  568041 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 18:37:11.446749  568041 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 18:37:11.446756  568041 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 18:37:11.446769  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.446774  568041 command_runner.go:130] > # rdt_config_file = ""
	I1008 18:37:11.446779  568041 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 18:37:11.446785  568041 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1008 18:37:11.446813  568041 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 18:37:11.446820  568041 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 18:37:11.446826  568041 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 18:37:11.446834  568041 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 18:37:11.446840  568041 command_runner.go:130] > # will be added.
	I1008 18:37:11.446844  568041 command_runner.go:130] > # default_capabilities = [
	I1008 18:37:11.446847  568041 command_runner.go:130] > # 	"CHOWN",
	I1008 18:37:11.446851  568041 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 18:37:11.446854  568041 command_runner.go:130] > # 	"FSETID",
	I1008 18:37:11.446858  568041 command_runner.go:130] > # 	"FOWNER",
	I1008 18:37:11.446862  568041 command_runner.go:130] > # 	"SETGID",
	I1008 18:37:11.446865  568041 command_runner.go:130] > # 	"SETUID",
	I1008 18:37:11.446869  568041 command_runner.go:130] > # 	"SETPCAP",
	I1008 18:37:11.446872  568041 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 18:37:11.446876  568041 command_runner.go:130] > # 	"KILL",
	I1008 18:37:11.446879  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446889  568041 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 18:37:11.446897  568041 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 18:37:11.446901  568041 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 18:37:11.446909  568041 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 18:37:11.446916  568041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 18:37:11.446920  568041 command_runner.go:130] > default_sysctls = [
	I1008 18:37:11.446926  568041 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 18:37:11.446929  568041 command_runner.go:130] > ]
	I1008 18:37:11.446933  568041 command_runner.go:130] > # List of devices on the host that a
	I1008 18:37:11.446940  568041 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 18:37:11.446944  568041 command_runner.go:130] > # allowed_devices = [
	I1008 18:37:11.446947  568041 command_runner.go:130] > # 	"/dev/fuse",
	I1008 18:37:11.446950  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446977  568041 command_runner.go:130] > # List of additional devices, specified as
	I1008 18:37:11.446992  568041 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 18:37:11.447000  568041 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 18:37:11.447006  568041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 18:37:11.447012  568041 command_runner.go:130] > # additional_devices = [
	I1008 18:37:11.447015  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447020  568041 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 18:37:11.447026  568041 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 18:37:11.447029  568041 command_runner.go:130] > # 	"/etc/cdi",
	I1008 18:37:11.447033  568041 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 18:37:11.447038  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447044  568041 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 18:37:11.447052  568041 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 18:37:11.447056  568041 command_runner.go:130] > # Defaults to false.
	I1008 18:37:11.447063  568041 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 18:37:11.447068  568041 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 18:37:11.447075  568041 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 18:37:11.447078  568041 command_runner.go:130] > # hooks_dir = [
	I1008 18:37:11.447083  568041 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 18:37:11.447088  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447094  568041 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 18:37:11.447102  568041 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 18:37:11.447107  568041 command_runner.go:130] > # its default mounts from the following two files:
	I1008 18:37:11.447110  568041 command_runner.go:130] > #
	I1008 18:37:11.447116  568041 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 18:37:11.447124  568041 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 18:37:11.447129  568041 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 18:37:11.447134  568041 command_runner.go:130] > #
	I1008 18:37:11.447140  568041 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 18:37:11.447148  568041 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 18:37:11.447153  568041 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 18:37:11.447163  568041 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 18:37:11.447169  568041 command_runner.go:130] > #
	I1008 18:37:11.447177  568041 command_runner.go:130] > # default_mounts_file = ""
	I1008 18:37:11.447184  568041 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 18:37:11.447191  568041 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 18:37:11.447194  568041 command_runner.go:130] > pids_limit = 1024
	I1008 18:37:11.447200  568041 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1008 18:37:11.447209  568041 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 18:37:11.447215  568041 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 18:37:11.447225  568041 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 18:37:11.447229  568041 command_runner.go:130] > # log_size_max = -1
	I1008 18:37:11.447235  568041 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 18:37:11.447241  568041 command_runner.go:130] > # log_to_journald = false
	I1008 18:37:11.447247  568041 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 18:37:11.447254  568041 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 18:37:11.447260  568041 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 18:37:11.447267  568041 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 18:37:11.447272  568041 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 18:37:11.447277  568041 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 18:37:11.447282  568041 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 18:37:11.447288  568041 command_runner.go:130] > # read_only = false
	I1008 18:37:11.447293  568041 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 18:37:11.447299  568041 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 18:37:11.447305  568041 command_runner.go:130] > # live configuration reload.
	I1008 18:37:11.447309  568041 command_runner.go:130] > # log_level = "info"
	I1008 18:37:11.447314  568041 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 18:37:11.447321  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.447325  568041 command_runner.go:130] > # log_filter = ""
	I1008 18:37:11.447332  568041 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 18:37:11.447340  568041 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 18:37:11.447346  568041 command_runner.go:130] > # separated by comma.
	I1008 18:37:11.447353  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447366  568041 command_runner.go:130] > # uid_mappings = ""
	I1008 18:37:11.447376  568041 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 18:37:11.447382  568041 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 18:37:11.447393  568041 command_runner.go:130] > # separated by comma.
	I1008 18:37:11.447401  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447408  568041 command_runner.go:130] > # gid_mappings = ""
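A hedged illustration of the containerUID:HostUID:Size and containerGID:HostGID:Size forms described above (hypothetical values; this run leaves both options unset, and both are noted as deprecated):
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"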
	I1008 18:37:11.447414  568041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 18:37:11.447422  568041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 18:37:11.447428  568041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 18:37:11.447437  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447441  568041 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 18:37:11.447448  568041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 18:37:11.447455  568041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 18:37:11.447461  568041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 18:37:11.447470  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447474  568041 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 18:37:11.447479  568041 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 18:37:11.447486  568041 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 18:37:11.447491  568041 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 18:37:11.447498  568041 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 18:37:11.447503  568041 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 18:37:11.447512  568041 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 18:37:11.447519  568041 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 18:37:11.447524  568041 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 18:37:11.447530  568041 command_runner.go:130] > drop_infra_ctr = false
	I1008 18:37:11.447536  568041 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 18:37:11.447544  568041 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 18:37:11.447550  568041 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 18:37:11.447556  568041 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 18:37:11.447563  568041 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 18:37:11.447569  568041 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 18:37:11.447574  568041 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 18:37:11.447582  568041 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 18:37:11.447585  568041 command_runner.go:130] > # shared_cpuset = ""
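For reference, a sketch of these two cpuset options filled in with Linux CPU list values (hypothetical values; this run leaves both unset):
	infra_ctr_cpuset = "0"
	shared_cpuset = "4-7"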
	I1008 18:37:11.447593  568041 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 18:37:11.447597  568041 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 18:37:11.447607  568041 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 18:37:11.447616  568041 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 18:37:11.447622  568041 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1008 18:37:11.447628  568041 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 18:37:11.447636  568041 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 18:37:11.447641  568041 command_runner.go:130] > # enable_criu_support = false
	I1008 18:37:11.447645  568041 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 18:37:11.447651  568041 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 18:37:11.447655  568041 command_runner.go:130] > # enable_pod_events = false
	I1008 18:37:11.447661  568041 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 18:37:11.447669  568041 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 18:37:11.447674  568041 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 18:37:11.447680  568041 command_runner.go:130] > # default_runtime = "runc"
	I1008 18:37:11.447685  568041 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 18:37:11.447694  568041 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1008 18:37:11.447702  568041 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 18:37:11.447709  568041 command_runner.go:130] > # creation as a file is not desired either.
	I1008 18:37:11.447717  568041 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 18:37:11.447724  568041 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 18:37:11.447728  568041 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 18:37:11.447731  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447737  568041 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 18:37:11.447745  568041 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 18:37:11.447751  568041 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 18:37:11.447759  568041 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 18:37:11.447763  568041 command_runner.go:130] > #
	I1008 18:37:11.447767  568041 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 18:37:11.447772  568041 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 18:37:11.447800  568041 command_runner.go:130] > # runtime_type = "oci"
	I1008 18:37:11.447807  568041 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 18:37:11.447812  568041 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 18:37:11.447818  568041 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 18:37:11.447823  568041 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 18:37:11.447834  568041 command_runner.go:130] > # monitor_env = []
	I1008 18:37:11.447841  568041 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 18:37:11.447845  568041 command_runner.go:130] > # allowed_annotations = []
	I1008 18:37:11.447850  568041 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 18:37:11.447855  568041 command_runner.go:130] > # Where:
	I1008 18:37:11.447859  568041 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 18:37:11.447865  568041 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 18:37:11.447873  568041 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 18:37:11.447879  568041 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 18:37:11.447887  568041 command_runner.go:130] > #   in $PATH.
	I1008 18:37:11.447893  568041 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 18:37:11.447897  568041 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 18:37:11.447906  568041 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 18:37:11.447912  568041 command_runner.go:130] > #   state.
	I1008 18:37:11.447918  568041 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 18:37:11.447926  568041 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1008 18:37:11.447932  568041 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 18:37:11.447939  568041 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 18:37:11.447945  568041 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 18:37:11.447953  568041 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 18:37:11.447958  568041 command_runner.go:130] > #   The currently recognized values are:
	I1008 18:37:11.447965  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 18:37:11.447974  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 18:37:11.447979  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 18:37:11.447988  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 18:37:11.447994  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 18:37:11.448002  568041 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 18:37:11.448009  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 18:37:11.448017  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 18:37:11.448023  568041 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 18:37:11.448031  568041 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 18:37:11.448035  568041 command_runner.go:130] > #   deprecated option "conmon".
	I1008 18:37:11.448042  568041 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 18:37:11.448049  568041 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 18:37:11.448057  568041 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 18:37:11.448062  568041 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 18:37:11.448068  568041 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 18:37:11.448076  568041 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 18:37:11.448081  568041 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 18:37:11.448087  568041 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 18:37:11.448090  568041 command_runner.go:130] > #
	I1008 18:37:11.448097  568041 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 18:37:11.448105  568041 command_runner.go:130] > #
	I1008 18:37:11.448111  568041 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 18:37:11.448119  568041 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 18:37:11.448123  568041 command_runner.go:130] > #
	I1008 18:37:11.448128  568041 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 18:37:11.448134  568041 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 18:37:11.448140  568041 command_runner.go:130] > #
	I1008 18:37:11.448146  568041 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 18:37:11.448150  568041 command_runner.go:130] > # feature.
	I1008 18:37:11.448155  568041 command_runner.go:130] > #
	I1008 18:37:11.448160  568041 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1008 18:37:11.448166  568041 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 18:37:11.448173  568041 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 18:37:11.448179  568041 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 18:37:11.448187  568041 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 18:37:11.448190  568041 command_runner.go:130] > #
	I1008 18:37:11.448195  568041 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 18:37:11.448201  568041 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 18:37:11.448205  568041 command_runner.go:130] > #
	I1008 18:37:11.448210  568041 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1008 18:37:11.448218  568041 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 18:37:11.448221  568041 command_runner.go:130] > #
	I1008 18:37:11.448226  568041 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 18:37:11.448234  568041 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 18:37:11.448241  568041 command_runner.go:130] > # limitation.
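Tying the comments above together, a hedged sketch of a runtime handler entry that would permit the seccomp notifier annotation; the handler name "runc-notify" is an assumption, and the paths simply mirror the runc entry that follows:
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]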
	I1008 18:37:11.448249  568041 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 18:37:11.448253  568041 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1008 18:37:11.448258  568041 command_runner.go:130] > runtime_type = "oci"
	I1008 18:37:11.448263  568041 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 18:37:11.448269  568041 command_runner.go:130] > runtime_config_path = ""
	I1008 18:37:11.448273  568041 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 18:37:11.448278  568041 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 18:37:11.448283  568041 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 18:37:11.448288  568041 command_runner.go:130] > monitor_env = [
	I1008 18:37:11.448293  568041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1008 18:37:11.448298  568041 command_runner.go:130] > ]
	I1008 18:37:11.448303  568041 command_runner.go:130] > privileged_without_host_devices = false
	I1008 18:37:11.448309  568041 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 18:37:11.448316  568041 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 18:37:11.448322  568041 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 18:37:11.448331  568041 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1008 18:37:11.448340  568041 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1008 18:37:11.448347  568041 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 18:37:11.448356  568041 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 18:37:11.448366  568041 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 18:37:11.448373  568041 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 18:37:11.448379  568041 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 18:37:11.448385  568041 command_runner.go:130] > # Example:
	I1008 18:37:11.448389  568041 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 18:37:11.448393  568041 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 18:37:11.448398  568041 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 18:37:11.448405  568041 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 18:37:11.448409  568041 command_runner.go:130] > # cpuset = 0
	I1008 18:37:11.448415  568041 command_runner.go:130] > # cpushares = "0-1"
	I1008 18:37:11.448419  568041 command_runner.go:130] > # Where:
	I1008 18:37:11.448425  568041 command_runner.go:130] > # The workload name is workload-type.
	I1008 18:37:11.448431  568041 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 18:37:11.448443  568041 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 18:37:11.448450  568041 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 18:37:11.448457  568041 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 18:37:11.448465  568041 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1008 18:37:11.448470  568041 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 18:37:11.448478  568041 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 18:37:11.448483  568041 command_runner.go:130] > # Default value is set to true
	I1008 18:37:11.448489  568041 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 18:37:11.448494  568041 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 18:37:11.448501  568041 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 18:37:11.448505  568041 command_runner.go:130] > # Default value is set to 'false'
	I1008 18:37:11.448521  568041 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 18:37:11.448527  568041 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 18:37:11.448530  568041 command_runner.go:130] > #
	I1008 18:37:11.448535  568041 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 18:37:11.448540  568041 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1008 18:37:11.448546  568041 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1008 18:37:11.448552  568041 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1008 18:37:11.448557  568041 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1008 18:37:11.448563  568041 command_runner.go:130] > [crio.image]
	I1008 18:37:11.448568  568041 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 18:37:11.448572  568041 command_runner.go:130] > # default_transport = "docker://"
	I1008 18:37:11.448577  568041 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 18:37:11.448583  568041 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 18:37:11.448586  568041 command_runner.go:130] > # global_auth_file = ""
	I1008 18:37:11.448591  568041 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 18:37:11.448595  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.448600  568041 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1008 18:37:11.448606  568041 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 18:37:11.448611  568041 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 18:37:11.448615  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.448619  568041 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 18:37:11.448624  568041 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 18:37:11.448632  568041 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1008 18:37:11.448637  568041 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1008 18:37:11.448642  568041 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 18:37:11.448646  568041 command_runner.go:130] > # pause_command = "/pause"
	I1008 18:37:11.448651  568041 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 18:37:11.448657  568041 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 18:37:11.448662  568041 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 18:37:11.448669  568041 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 18:37:11.448674  568041 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 18:37:11.448679  568041 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 18:37:11.448682  568041 command_runner.go:130] > # pinned_images = [
	I1008 18:37:11.448685  568041 command_runner.go:130] > # ]
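To make the three matching styles concrete, a hedged example of a pinned_images list (the entries are illustrative, not from this run's config):
	pinned_images = [
		"registry.k8s.io/pause:3.10",       # exact match: must match the whole name
		"registry.k8s.io/kube-apiserver*",  # glob match: wildcard only at the end
		"*coredns*",                        # keyword match: wildcards on both ends
	]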
	I1008 18:37:11.448691  568041 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 18:37:11.448697  568041 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 18:37:11.448705  568041 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 18:37:11.448712  568041 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 18:37:11.448716  568041 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 18:37:11.448720  568041 command_runner.go:130] > # signature_policy = ""
	I1008 18:37:11.448725  568041 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 18:37:11.448731  568041 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 18:37:11.448736  568041 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 18:37:11.448744  568041 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1008 18:37:11.448749  568041 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 18:37:11.448754  568041 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1008 18:37:11.448762  568041 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 18:37:11.448768  568041 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 18:37:11.448772  568041 command_runner.go:130] > # changing them here.
	I1008 18:37:11.448779  568041 command_runner.go:130] > # insecure_registries = [
	I1008 18:37:11.448783  568041 command_runner.go:130] > # ]
	I1008 18:37:11.448788  568041 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 18:37:11.448794  568041 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 18:37:11.448798  568041 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 18:37:11.448806  568041 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 18:37:11.448812  568041 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 18:37:11.448821  568041 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 18:37:11.448824  568041 command_runner.go:130] > # CNI plugins.
	I1008 18:37:11.448829  568041 command_runner.go:130] > [crio.network]
	I1008 18:37:11.448835  568041 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 18:37:11.448842  568041 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1008 18:37:11.448846  568041 command_runner.go:130] > # cni_default_network = ""
	I1008 18:37:11.448853  568041 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 18:37:11.448858  568041 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 18:37:11.448864  568041 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 18:37:11.448869  568041 command_runner.go:130] > # plugin_dirs = [
	I1008 18:37:11.448873  568041 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 18:37:11.448876  568041 command_runner.go:130] > # ]
	I1008 18:37:11.448882  568041 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 18:37:11.448886  568041 command_runner.go:130] > [crio.metrics]
	I1008 18:37:11.448890  568041 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 18:37:11.448896  568041 command_runner.go:130] > enable_metrics = true
	I1008 18:37:11.448901  568041 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 18:37:11.448907  568041 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 18:37:11.448913  568041 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1008 18:37:11.448922  568041 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 18:37:11.448930  568041 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 18:37:11.448934  568041 command_runner.go:130] > # metrics_collectors = [
	I1008 18:37:11.448939  568041 command_runner.go:130] > # 	"operations",
	I1008 18:37:11.448944  568041 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1008 18:37:11.448948  568041 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1008 18:37:11.448954  568041 command_runner.go:130] > # 	"operations_errors",
	I1008 18:37:11.448958  568041 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1008 18:37:11.448964  568041 command_runner.go:130] > # 	"image_pulls_by_name",
	I1008 18:37:11.448968  568041 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1008 18:37:11.448974  568041 command_runner.go:130] > # 	"image_pulls_failures",
	I1008 18:37:11.448980  568041 command_runner.go:130] > # 	"image_pulls_successes",
	I1008 18:37:11.448984  568041 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 18:37:11.448990  568041 command_runner.go:130] > # 	"image_layer_reuse",
	I1008 18:37:11.448996  568041 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 18:37:11.449000  568041 command_runner.go:130] > # 	"containers_oom_total",
	I1008 18:37:11.449004  568041 command_runner.go:130] > # 	"containers_oom",
	I1008 18:37:11.449008  568041 command_runner.go:130] > # 	"processes_defunct",
	I1008 18:37:11.449014  568041 command_runner.go:130] > # 	"operations_total",
	I1008 18:37:11.449017  568041 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 18:37:11.449022  568041 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 18:37:11.449026  568041 command_runner.go:130] > # 	"operations_errors_total",
	I1008 18:37:11.449030  568041 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 18:37:11.449035  568041 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 18:37:11.449039  568041 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 18:37:11.449043  568041 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 18:37:11.449049  568041 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 18:37:11.449053  568041 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 18:37:11.449057  568041 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 18:37:11.449063  568041 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 18:37:11.449067  568041 command_runner.go:130] > # ]
	I1008 18:37:11.449072  568041 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 18:37:11.449077  568041 command_runner.go:130] > # metrics_port = 9090
	I1008 18:37:11.449082  568041 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 18:37:11.449086  568041 command_runner.go:130] > # metrics_socket = ""
	I1008 18:37:11.449091  568041 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 18:37:11.449098  568041 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 18:37:11.449104  568041 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 18:37:11.449111  568041 command_runner.go:130] > # certificate on any modification event.
	I1008 18:37:11.449114  568041 command_runner.go:130] > # metrics_cert = ""
	I1008 18:37:11.449119  568041 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 18:37:11.449126  568041 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 18:37:11.449130  568041 command_runner.go:130] > # metrics_key = ""
	I1008 18:37:11.449137  568041 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 18:37:11.449141  568041 command_runner.go:130] > [crio.tracing]
	I1008 18:37:11.449146  568041 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 18:37:11.449151  568041 command_runner.go:130] > # enable_tracing = false
	I1008 18:37:11.449157  568041 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1008 18:37:11.449163  568041 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1008 18:37:11.449170  568041 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 18:37:11.449176  568041 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1008 18:37:11.449180  568041 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 18:37:11.449185  568041 command_runner.go:130] > [crio.nri]
	I1008 18:37:11.449189  568041 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 18:37:11.449195  568041 command_runner.go:130] > # enable_nri = false
	I1008 18:37:11.449201  568041 command_runner.go:130] > # NRI socket to listen on.
	I1008 18:37:11.449207  568041 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 18:37:11.449212  568041 command_runner.go:130] > # NRI plugin directory to use.
	I1008 18:37:11.449216  568041 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 18:37:11.449221  568041 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 18:37:11.449228  568041 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 18:37:11.449233  568041 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 18:37:11.449238  568041 command_runner.go:130] > # nri_disable_connections = false
	I1008 18:37:11.449245  568041 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 18:37:11.449250  568041 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 18:37:11.449257  568041 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 18:37:11.449261  568041 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 18:37:11.449269  568041 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 18:37:11.449273  568041 command_runner.go:130] > [crio.stats]
	I1008 18:37:11.449280  568041 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 18:37:11.449286  568041 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 18:37:11.449290  568041 command_runner.go:130] > # stats_collection_period = 0
	I1008 18:37:11.449362  568041 cni.go:84] Creating CNI manager for ""
	I1008 18:37:11.449373  568041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1008 18:37:11.449392  568041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:37:11.449415  568041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-255508 NodeName:multinode-255508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:37:11.449562  568041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-255508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:37:11.449637  568041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:37:11.460294  568041 command_runner.go:130] > kubeadm
	I1008 18:37:11.460318  568041 command_runner.go:130] > kubectl
	I1008 18:37:11.460322  568041 command_runner.go:130] > kubelet
	I1008 18:37:11.460684  568041 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:37:11.460750  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:37:11.469759  568041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1008 18:37:11.485750  568041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:37:11.501258  568041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1008 18:37:11.517424  568041 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I1008 18:37:11.521128  568041 command_runner.go:130] > 192.168.39.43	control-plane.minikube.internal
	I1008 18:37:11.521191  568041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:37:11.661762  568041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:37:11.675953  568041 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508 for IP: 192.168.39.43
	I1008 18:37:11.675974  568041 certs.go:194] generating shared ca certs ...
	I1008 18:37:11.675992  568041 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:37:11.676168  568041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:37:11.676207  568041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:37:11.676217  568041 certs.go:256] generating profile certs ...
	I1008 18:37:11.676294  568041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/client.key
	I1008 18:37:11.676345  568041 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.key.a701f6f9
	I1008 18:37:11.676392  568041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.key
	I1008 18:37:11.676403  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:37:11.676419  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:37:11.676431  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:37:11.676443  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:37:11.676456  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:37:11.676468  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:37:11.676480  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:37:11.676492  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:37:11.676542  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:37:11.676569  568041 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:37:11.676577  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:37:11.676600  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:37:11.676626  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:37:11.676646  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:37:11.676682  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:37:11.676707  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:37:11.676724  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.676741  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.677320  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:37:11.701099  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:37:11.724839  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:37:11.748617  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:37:11.772150  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 18:37:11.795325  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:37:11.819052  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:37:11.842130  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:37:11.865438  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:37:11.888085  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:37:11.911184  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:37:11.934370  568041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:37:11.950108  568041 ssh_runner.go:195] Run: openssl version
	I1008 18:37:11.955596  568041 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1008 18:37:11.955690  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:37:11.966034  568041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.970050  568041 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.970307  568041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.970359  568041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.975488  568041 command_runner.go:130] > b5213941
	I1008 18:37:11.975573  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:37:11.984017  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:37:11.993799  568041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.997726  568041 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.997867  568041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.997899  568041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:37:12.003163  568041 command_runner.go:130] > 51391683
	I1008 18:37:12.003224  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:37:12.011824  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:37:12.022027  568041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.026103  568041 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.026145  568041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.026175  568041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.031707  568041 command_runner.go:130] > 3ec20f2e
	I1008 18:37:12.031770  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:37:12.040534  568041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:37:12.044671  568041 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:37:12.044691  568041 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 18:37:12.044698  568041 command_runner.go:130] > Device: 253,1	Inode: 1054760     Links: 1
	I1008 18:37:12.044704  568041 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 18:37:12.044709  568041 command_runner.go:130] > Access: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044717  568041 command_runner.go:130] > Modify: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044729  568041 command_runner.go:130] > Change: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044740  568041 command_runner.go:130] >  Birth: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044795  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:37:12.050040  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.050105  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:37:12.055285  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.055336  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:37:12.060813  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.060925  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:37:12.065885  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.066034  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:37:12.071334  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.071398  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:37:12.076390  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.076554  568041 kubeadm.go:392] StartCluster: {Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:37:12.076691  568041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:37:12.076729  568041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:37:12.116861  568041 command_runner.go:130] > 672db4e9258153019940a867d1ad7d2253520762b6f23667dc5f5ef6e45d9318
	I1008 18:37:12.116895  568041 command_runner.go:130] > dbb17614f252c7bcbb0b8617e0310a7180ecd542750142b96cc63ae40345bd27
	I1008 18:37:12.116904  568041 command_runner.go:130] > 741cf09d69c22d616c5d54ab640f3f0d2229986097f1709c9a7cd52a92adbf8c
	I1008 18:37:12.116914  568041 command_runner.go:130] > c7c3519a922cdc33a9c9d911b58ba912091793679ccef944c75e4701cad7817f
	I1008 18:37:12.116923  568041 command_runner.go:130] > 6c1c60b60438057fd01ceecf74b3b223b69a378532b6ab5692e09a954c28569a
	I1008 18:37:12.116932  568041 command_runner.go:130] > 042f2bb068a141f95a10c6f223bdd18c22923616806263786c49a5cbee04d328
	I1008 18:37:12.116940  568041 command_runner.go:130] > 694038df9e668a5e55f19956048aab8d5a860b9b011446b24779138d4859b105
	I1008 18:37:12.116953  568041 command_runner.go:130] > 0cb8bb904b7b859112685b06aa32674e1f0fdeb6f1c9b970e6369d9988d9c74d
	I1008 18:37:12.116978  568041 cri.go:89] found id: "672db4e9258153019940a867d1ad7d2253520762b6f23667dc5f5ef6e45d9318"
	I1008 18:37:12.116987  568041 cri.go:89] found id: "dbb17614f252c7bcbb0b8617e0310a7180ecd542750142b96cc63ae40345bd27"
	I1008 18:37:12.116990  568041 cri.go:89] found id: "741cf09d69c22d616c5d54ab640f3f0d2229986097f1709c9a7cd52a92adbf8c"
	I1008 18:37:12.116994  568041 cri.go:89] found id: "c7c3519a922cdc33a9c9d911b58ba912091793679ccef944c75e4701cad7817f"
	I1008 18:37:12.116997  568041 cri.go:89] found id: "6c1c60b60438057fd01ceecf74b3b223b69a378532b6ab5692e09a954c28569a"
	I1008 18:37:12.117003  568041 cri.go:89] found id: "042f2bb068a141f95a10c6f223bdd18c22923616806263786c49a5cbee04d328"
	I1008 18:37:12.117006  568041 cri.go:89] found id: "694038df9e668a5e55f19956048aab8d5a860b9b011446b24779138d4859b105"
	I1008 18:37:12.117009  568041 cri.go:89] found id: "0cb8bb904b7b859112685b06aa32674e1f0fdeb6f1c9b970e6369d9988d9c74d"
	I1008 18:37:12.117011  568041 cri.go:89] found id: ""
	I1008 18:37:12.117050  568041 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-255508 -n multinode-255508
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-255508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (317.49s)

x
+
TestMultiNode/serial/StopMultiNode (145.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 stop
E1008 18:40:51.766580  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-255508 stop: exit status 82 (2m0.460156951s)

-- stdout --
	* Stopping node "multinode-255508-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-255508 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 status: (18.791711966s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr: (3.359590699s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-255508 -n multinode-255508
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 logs -n 25: (1.873291205s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508:/home/docker/cp-test_multinode-255508-m02_multinode-255508.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508 sudo cat                                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m02_multinode-255508.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03:/home/docker/cp-test_multinode-255508-m02_multinode-255508-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508-m03 sudo cat                                   | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m02_multinode-255508-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp testdata/cp-test.txt                                                | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile543339778/001/cp-test_multinode-255508-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508:/home/docker/cp-test_multinode-255508-m03_multinode-255508.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508 sudo cat                                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m03_multinode-255508.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt                       | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m02:/home/docker/cp-test_multinode-255508-m03_multinode-255508-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n                                                                 | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | multinode-255508-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-255508 ssh -n multinode-255508-m02 sudo cat                                   | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	|         | /home/docker/cp-test_multinode-255508-m03_multinode-255508-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-255508 node stop m03                                                          | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:32 UTC |
	| node    | multinode-255508 node start                                                             | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:32 UTC | 08 Oct 24 18:33 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-255508                                                                | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:33 UTC |                     |
	| stop    | -p multinode-255508                                                                     | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:33 UTC |                     |
	| start   | -p multinode-255508                                                                     | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:35 UTC | 08 Oct 24 18:38 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-255508                                                                | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:38 UTC |                     |
	| node    | multinode-255508 node delete                                                            | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:38 UTC | 08 Oct 24 18:38 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-255508 stop                                                                   | multinode-255508 | jenkins | v1.34.0 | 08 Oct 24 18:38 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:35:38
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:35:38.050074  568041 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:35:38.050173  568041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:35:38.050181  568041 out.go:358] Setting ErrFile to fd 2...
	I1008 18:35:38.050184  568041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:35:38.050401  568041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:35:38.050928  568041 out.go:352] Setting JSON to false
	I1008 18:35:38.051885  568041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8290,"bootTime":1728404248,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:35:38.051982  568041 start.go:139] virtualization: kvm guest
	I1008 18:35:38.055003  568041 out.go:177] * [multinode-255508] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:35:38.056363  568041 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:35:38.056443  568041 notify.go:220] Checking for updates...
	I1008 18:35:38.058769  568041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:35:38.059994  568041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:35:38.061132  568041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:35:38.062376  568041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:35:38.063484  568041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:35:38.064918  568041 config.go:182] Loaded profile config "multinode-255508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:35:38.065012  568041 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:35:38.065454  568041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:35:38.065538  568041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:35:38.080667  568041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1008 18:35:38.081169  568041 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:35:38.081796  568041 main.go:141] libmachine: Using API Version  1
	I1008 18:35:38.081820  568041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:35:38.082199  568041 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:35:38.082396  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:35:38.116329  568041 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:35:38.117402  568041 start.go:297] selected driver: kvm2
	I1008 18:35:38.117414  568041 start.go:901] validating driver "kvm2" against &{Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:35:38.117561  568041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:35:38.117891  568041 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:35:38.117966  568041 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:35:38.132625  568041 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:35:38.133354  568041 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:35:38.133388  568041 cni.go:84] Creating CNI manager for ""
	I1008 18:35:38.133448  568041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1008 18:35:38.133523  568041 start.go:340] cluster config:
	{Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:35:38.133693  568041 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:35:38.135213  568041 out.go:177] * Starting "multinode-255508" primary control-plane node in "multinode-255508" cluster
	I1008 18:35:38.136253  568041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:35:38.136291  568041 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:35:38.136304  568041 cache.go:56] Caching tarball of preloaded images
	I1008 18:35:38.136386  568041 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:35:38.136401  568041 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:35:38.136546  568041 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/config.json ...
	I1008 18:35:38.136764  568041 start.go:360] acquireMachinesLock for multinode-255508: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:35:38.136810  568041 start.go:364] duration metric: took 25.242µs to acquireMachinesLock for "multinode-255508"
	I1008 18:35:38.136830  568041 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:35:38.136839  568041 fix.go:54] fixHost starting: 
	I1008 18:35:38.137120  568041 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:35:38.137158  568041 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:35:38.150937  568041 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I1008 18:35:38.151507  568041 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:35:38.152041  568041 main.go:141] libmachine: Using API Version  1
	I1008 18:35:38.152071  568041 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:35:38.152414  568041 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:35:38.152626  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:35:38.152778  568041 main.go:141] libmachine: (multinode-255508) Calling .GetState
	I1008 18:35:38.154149  568041 fix.go:112] recreateIfNeeded on multinode-255508: state=Running err=<nil>
	W1008 18:35:38.154169  568041 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:35:38.156449  568041 out.go:177] * Updating the running kvm2 "multinode-255508" VM ...
	I1008 18:35:38.157704  568041 machine.go:93] provisionDockerMachine start ...
	I1008 18:35:38.157724  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:35:38.157912  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.160350  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.160775  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.160817  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.160897  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.161059  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.161213  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.161306  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.161444  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.161650  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.161661  568041 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:35:38.264229  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-255508
	
	I1008 18:35:38.264264  568041 main.go:141] libmachine: (multinode-255508) Calling .GetMachineName
	I1008 18:35:38.264486  568041 buildroot.go:166] provisioning hostname "multinode-255508"
	I1008 18:35:38.264530  568041 main.go:141] libmachine: (multinode-255508) Calling .GetMachineName
	I1008 18:35:38.264730  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.267490  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.267888  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.267920  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.268056  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.268249  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.268398  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.268536  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.268669  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.268864  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.268881  568041 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-255508 && echo "multinode-255508" | sudo tee /etc/hostname
	I1008 18:35:38.384761  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-255508
	
	I1008 18:35:38.384798  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.387458  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.387798  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.387834  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.387958  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.388133  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.388295  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.388467  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.388614  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.388787  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.388808  568041 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-255508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-255508/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-255508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:35:38.487280  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:35:38.487316  568041 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:35:38.487359  568041 buildroot.go:174] setting up certificates
	I1008 18:35:38.487376  568041 provision.go:84] configureAuth start
	I1008 18:35:38.487395  568041 main.go:141] libmachine: (multinode-255508) Calling .GetMachineName
	I1008 18:35:38.487679  568041 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:35:38.489997  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.490424  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.490449  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.490518  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.492594  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.492886  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.492921  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.493024  568041 provision.go:143] copyHostCerts
	I1008 18:35:38.493078  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:35:38.493130  568041 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:35:38.493142  568041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:35:38.493218  568041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:35:38.493342  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:35:38.493371  568041 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:35:38.493380  568041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:35:38.493424  568041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:35:38.493542  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:35:38.493572  568041 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:35:38.493578  568041 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:35:38.493611  568041 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:35:38.493684  568041 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.multinode-255508 san=[127.0.0.1 192.168.39.43 localhost minikube multinode-255508]
	I1008 18:35:38.654473  568041 provision.go:177] copyRemoteCerts
	I1008 18:35:38.654542  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:35:38.654569  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.656972  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.657274  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.657305  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.657431  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.657623  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.657795  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.657946  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:35:38.736797  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 18:35:38.736859  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1008 18:35:38.760171  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 18:35:38.760234  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:35:38.783571  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 18:35:38.783638  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:35:38.805869  568041 provision.go:87] duration metric: took 318.478434ms to configureAuth
	I1008 18:35:38.805896  568041 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:35:38.806113  568041 config.go:182] Loaded profile config "multinode-255508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:35:38.806204  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:35:38.808638  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.808994  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:35:38.809027  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:35:38.809173  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:35:38.809354  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.809528  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:35:38.809671  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:35:38.809806  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:35:38.810010  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:35:38.810032  568041 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:37:09.540080  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:37:09.540112  568041 machine.go:96] duration metric: took 1m31.382394022s to provisionDockerMachine
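The sysconfig drop-in and the trailing "sudo systemctl restart crio" above are pushed to the guest over SSH; this single invocation accounts for most of the 1m31s provisionDockerMachine duration logged here. A minimal sketch of running that command with golang.org/x/crypto/ssh follows. The host, port, user and key path are the ones logged above; the insecure host-key callback is an assumption appropriate only for a throwaway test VM.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only assumption; do not skip host-key checks in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.43:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same command text as logged above: write the CRIO options drop-in, then restart crio.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		log.Fatalf("remote command failed: %v: %s", err, out)
	}
	log.Printf("remote output: %s", out)
}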
	I1008 18:37:09.540127  568041 start.go:293] postStartSetup for "multinode-255508" (driver="kvm2")
	I1008 18:37:09.540146  568041 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:37:09.540201  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.540587  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:37:09.540655  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.544021  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.544464  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.544497  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.544687  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.544867  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.545021  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.545173  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:37:09.626568  568041 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:37:09.630842  568041 command_runner.go:130] > NAME=Buildroot
	I1008 18:37:09.630864  568041 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1008 18:37:09.630868  568041 command_runner.go:130] > ID=buildroot
	I1008 18:37:09.630875  568041 command_runner.go:130] > VERSION_ID=2023.02.9
	I1008 18:37:09.630881  568041 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1008 18:37:09.630921  568041 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:37:09.630938  568041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:37:09.631013  568041 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:37:09.631109  568041 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:37:09.631121  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /etc/ssl/certs/5370132.pem
	I1008 18:37:09.631242  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:37:09.640932  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:37:09.663866  568041 start.go:296] duration metric: took 123.725256ms for postStartSetup
	I1008 18:37:09.663910  568041 fix.go:56] duration metric: took 1m31.527071073s for fixHost
	I1008 18:37:09.663932  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.666562  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.666937  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.666969  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.667189  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.667371  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.667561  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.667702  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.667866  568041 main.go:141] libmachine: Using SSH client type: native
	I1008 18:37:09.668057  568041 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I1008 18:37:09.668070  568041 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:37:09.766935  568041 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728412629.734144499
	
	I1008 18:37:09.766962  568041 fix.go:216] guest clock: 1728412629.734144499
	I1008 18:37:09.766970  568041 fix.go:229] Guest: 2024-10-08 18:37:09.734144499 +0000 UTC Remote: 2024-10-08 18:37:09.663914553 +0000 UTC m=+91.653762432 (delta=70.229946ms)
	I1008 18:37:09.767020  568041 fix.go:200] guest clock delta is within tolerance: 70.229946ms
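fix.go above compares the guest's "date +%s.%N" output against the host clock and accepts the ~70ms delta. A small sketch of that comparison, reusing the timestamp from the log; the 2s tolerance here is an assumption for illustration, not minikube's actual threshold, and the parser assumes %N prints nine digits of nanoseconds.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728412629.734144499") // value captured in the log above
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}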
	I1008 18:37:09.767028  568041 start.go:83] releasing machines lock for "multinode-255508", held for 1m31.630206079s
	I1008 18:37:09.767068  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.767392  568041 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:37:09.769902  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.770343  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.770375  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.770490  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.771067  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.771258  568041 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:37:09.771370  568041 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:37:09.771417  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.771464  568041 ssh_runner.go:195] Run: cat /version.json
	I1008 18:37:09.771487  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:37:09.774064  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774214  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774517  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.774543  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774622  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.774784  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.774806  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:09.774833  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:09.774956  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:37:09.775040  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.775055  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:37:09.775155  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:37:09.775202  568041 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:37:09.775317  568041 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:37:09.846442  568041 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1008 18:37:09.846742  568041 ssh_runner.go:195] Run: systemctl --version
	I1008 18:37:09.871718  568041 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1008 18:37:09.872364  568041 command_runner.go:130] > systemd 252 (252)
	I1008 18:37:09.872398  568041 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1008 18:37:09.872459  568041 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:37:10.030760  568041 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1008 18:37:10.036955  568041 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1008 18:37:10.037029  568041 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:37:10.037089  568041 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:37:10.047110  568041 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:37:10.047137  568041 start.go:495] detecting cgroup driver to use...
	I1008 18:37:10.047209  568041 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:37:10.063590  568041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:37:10.077002  568041 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:37:10.077053  568041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:37:10.090443  568041 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:37:10.103616  568041 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:37:10.248001  568041 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:37:10.400774  568041 docker.go:233] disabling docker service ...
	I1008 18:37:10.400860  568041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:37:10.424900  568041 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:37:10.438878  568041 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:37:10.585134  568041 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:37:10.727825  568041 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:37:10.742222  568041 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:37:10.760525  568041 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1008 18:37:10.760780  568041 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:37:10.760869  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.771750  568041 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:37:10.771831  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.782649  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.793535  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.805254  568041 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:37:10.816830  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.827842  568041 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.838510  568041 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:37:10.849975  568041 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:37:10.859680  568041 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1008 18:37:10.859745  568041 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:37:10.869585  568041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:37:11.002387  568041 ssh_runner.go:195] Run: sudo systemctl restart crio
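The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch cri-o to the cgroupfs cgroup manager before the daemon-reload and restart. The sketch below reproduces the effect of the first few of those edits on a hypothetical starting config using Go's regexp package; the initial file content is invented for illustration, and the later sysctl-related edits are omitted.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting content; the real file on the guest may differ.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image (mirrors the first sed above).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}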
	I1008 18:37:11.195529  568041 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:37:11.195613  568041 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:37:11.200275  568041 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1008 18:37:11.200305  568041 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1008 18:37:11.200314  568041 command_runner.go:130] > Device: 0,22	Inode: 1269        Links: 1
	I1008 18:37:11.200325  568041 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 18:37:11.200333  568041 command_runner.go:130] > Access: 2024-10-08 18:37:11.138923277 +0000
	I1008 18:37:11.200345  568041 command_runner.go:130] > Modify: 2024-10-08 18:37:11.063920996 +0000
	I1008 18:37:11.200353  568041 command_runner.go:130] > Change: 2024-10-08 18:37:11.063920996 +0000
	I1008 18:37:11.200362  568041 command_runner.go:130] >  Birth: -
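After the restart, the test waits up to 60s for /var/run/crio/crio.sock and then stats it, as shown above. A minimal polling sketch of that wait; the 500ms poll interval is an arbitrary choice for illustration, not minikube's.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is up")
}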
	I1008 18:37:11.200425  568041 start.go:563] Will wait 60s for crictl version
	I1008 18:37:11.200487  568041 ssh_runner.go:195] Run: which crictl
	I1008 18:37:11.204040  568041 command_runner.go:130] > /usr/bin/crictl
	I1008 18:37:11.204199  568041 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:37:11.244185  568041 command_runner.go:130] > Version:  0.1.0
	I1008 18:37:11.244215  568041 command_runner.go:130] > RuntimeName:  cri-o
	I1008 18:37:11.244236  568041 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1008 18:37:11.244245  568041 command_runner.go:130] > RuntimeApiVersion:  v1
	I1008 18:37:11.244269  568041 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:37:11.244369  568041 ssh_runner.go:195] Run: crio --version
	I1008 18:37:11.271722  568041 command_runner.go:130] > crio version 1.29.1
	I1008 18:37:11.271750  568041 command_runner.go:130] > Version:        1.29.1
	I1008 18:37:11.271756  568041 command_runner.go:130] > GitCommit:      unknown
	I1008 18:37:11.271760  568041 command_runner.go:130] > GitCommitDate:  unknown
	I1008 18:37:11.271763  568041 command_runner.go:130] > GitTreeState:   clean
	I1008 18:37:11.271769  568041 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1008 18:37:11.271773  568041 command_runner.go:130] > GoVersion:      go1.21.6
	I1008 18:37:11.271776  568041 command_runner.go:130] > Compiler:       gc
	I1008 18:37:11.271781  568041 command_runner.go:130] > Platform:       linux/amd64
	I1008 18:37:11.271784  568041 command_runner.go:130] > Linkmode:       dynamic
	I1008 18:37:11.271788  568041 command_runner.go:130] > BuildTags:      
	I1008 18:37:11.271792  568041 command_runner.go:130] >   containers_image_ostree_stub
	I1008 18:37:11.271796  568041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1008 18:37:11.271799  568041 command_runner.go:130] >   btrfs_noversion
	I1008 18:37:11.271804  568041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1008 18:37:11.271808  568041 command_runner.go:130] >   libdm_no_deferred_remove
	I1008 18:37:11.271811  568041 command_runner.go:130] >   seccomp
	I1008 18:37:11.271815  568041 command_runner.go:130] > LDFlags:          unknown
	I1008 18:37:11.271819  568041 command_runner.go:130] > SeccompEnabled:   true
	I1008 18:37:11.271822  568041 command_runner.go:130] > AppArmorEnabled:  false
	I1008 18:37:11.272800  568041 ssh_runner.go:195] Run: crio --version
	I1008 18:37:11.299149  568041 command_runner.go:130] > crio version 1.29.1
	I1008 18:37:11.299177  568041 command_runner.go:130] > Version:        1.29.1
	I1008 18:37:11.299185  568041 command_runner.go:130] > GitCommit:      unknown
	I1008 18:37:11.299192  568041 command_runner.go:130] > GitCommitDate:  unknown
	I1008 18:37:11.299199  568041 command_runner.go:130] > GitTreeState:   clean
	I1008 18:37:11.299208  568041 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1008 18:37:11.299215  568041 command_runner.go:130] > GoVersion:      go1.21.6
	I1008 18:37:11.299221  568041 command_runner.go:130] > Compiler:       gc
	I1008 18:37:11.299229  568041 command_runner.go:130] > Platform:       linux/amd64
	I1008 18:37:11.299238  568041 command_runner.go:130] > Linkmode:       dynamic
	I1008 18:37:11.299248  568041 command_runner.go:130] > BuildTags:      
	I1008 18:37:11.299266  568041 command_runner.go:130] >   containers_image_ostree_stub
	I1008 18:37:11.299273  568041 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1008 18:37:11.299280  568041 command_runner.go:130] >   btrfs_noversion
	I1008 18:37:11.299288  568041 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1008 18:37:11.299298  568041 command_runner.go:130] >   libdm_no_deferred_remove
	I1008 18:37:11.299304  568041 command_runner.go:130] >   seccomp
	I1008 18:37:11.299312  568041 command_runner.go:130] > LDFlags:          unknown
	I1008 18:37:11.299318  568041 command_runner.go:130] > SeccompEnabled:   true
	I1008 18:37:11.299326  568041 command_runner.go:130] > AppArmorEnabled:  false
	I1008 18:37:11.304853  568041 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:37:11.306134  568041 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:37:11.309068  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:11.309461  568041 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:37:11.309491  568041 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:37:11.309665  568041 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:37:11.313580  568041 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1008 18:37:11.313753  568041 kubeadm.go:883] updating cluster {Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:37:11.313894  568041 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:37:11.313935  568041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:37:11.352028  568041 command_runner.go:130] > {
	I1008 18:37:11.352049  568041 command_runner.go:130] >   "images": [
	I1008 18:37:11.352060  568041 command_runner.go:130] >     {
	I1008 18:37:11.352071  568041 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1008 18:37:11.352076  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352081  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1008 18:37:11.352085  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352088  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352096  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1008 18:37:11.352109  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1008 18:37:11.352113  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352117  568041 command_runner.go:130] >       "size": "87190579",
	I1008 18:37:11.352120  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352124  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352132  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352136  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352139  568041 command_runner.go:130] >     },
	I1008 18:37:11.352143  568041 command_runner.go:130] >     {
	I1008 18:37:11.352151  568041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1008 18:37:11.352155  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352160  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1008 18:37:11.352165  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352169  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352176  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1008 18:37:11.352182  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1008 18:37:11.352187  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352190  568041 command_runner.go:130] >       "size": "1363676",
	I1008 18:37:11.352194  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352203  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352209  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352212  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352215  568041 command_runner.go:130] >     },
	I1008 18:37:11.352219  568041 command_runner.go:130] >     {
	I1008 18:37:11.352224  568041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 18:37:11.352229  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352237  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 18:37:11.352243  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352247  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352254  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 18:37:11.352263  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 18:37:11.352266  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352273  568041 command_runner.go:130] >       "size": "31470524",
	I1008 18:37:11.352276  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352280  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352286  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352290  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352295  568041 command_runner.go:130] >     },
	I1008 18:37:11.352303  568041 command_runner.go:130] >     {
	I1008 18:37:11.352311  568041 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1008 18:37:11.352318  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352325  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1008 18:37:11.352328  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352332  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352341  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1008 18:37:11.352357  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1008 18:37:11.352363  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352367  568041 command_runner.go:130] >       "size": "63273227",
	I1008 18:37:11.352370  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352375  568041 command_runner.go:130] >       "username": "nonroot",
	I1008 18:37:11.352379  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352385  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352388  568041 command_runner.go:130] >     },
	I1008 18:37:11.352393  568041 command_runner.go:130] >     {
	I1008 18:37:11.352399  568041 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1008 18:37:11.352405  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352410  568041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1008 18:37:11.352415  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352419  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352432  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1008 18:37:11.352441  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1008 18:37:11.352446  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352450  568041 command_runner.go:130] >       "size": "149009664",
	I1008 18:37:11.352456  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352460  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352463  568041 command_runner.go:130] >       },
	I1008 18:37:11.352466  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352471  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352475  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352481  568041 command_runner.go:130] >     },
	I1008 18:37:11.352484  568041 command_runner.go:130] >     {
	I1008 18:37:11.352490  568041 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1008 18:37:11.352496  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352501  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1008 18:37:11.352515  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352521  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352528  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1008 18:37:11.352537  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1008 18:37:11.352544  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352547  568041 command_runner.go:130] >       "size": "95237600",
	I1008 18:37:11.352551  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352555  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352560  568041 command_runner.go:130] >       },
	I1008 18:37:11.352564  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352570  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352574  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352580  568041 command_runner.go:130] >     },
	I1008 18:37:11.352583  568041 command_runner.go:130] >     {
	I1008 18:37:11.352589  568041 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1008 18:37:11.352595  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352600  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1008 18:37:11.352605  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352614  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352624  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1008 18:37:11.352633  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1008 18:37:11.352637  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352640  568041 command_runner.go:130] >       "size": "89437508",
	I1008 18:37:11.352645  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352648  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352652  568041 command_runner.go:130] >       },
	I1008 18:37:11.352656  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352662  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352666  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352671  568041 command_runner.go:130] >     },
	I1008 18:37:11.352674  568041 command_runner.go:130] >     {
	I1008 18:37:11.352680  568041 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1008 18:37:11.352685  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352689  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1008 18:37:11.352694  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352697  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352719  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1008 18:37:11.352728  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1008 18:37:11.352731  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352735  568041 command_runner.go:130] >       "size": "92733849",
	I1008 18:37:11.352739  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.352742  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352746  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352750  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352752  568041 command_runner.go:130] >     },
	I1008 18:37:11.352755  568041 command_runner.go:130] >     {
	I1008 18:37:11.352761  568041 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1008 18:37:11.352764  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352769  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1008 18:37:11.352772  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352775  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352787  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1008 18:37:11.352793  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1008 18:37:11.352797  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352800  568041 command_runner.go:130] >       "size": "68420934",
	I1008 18:37:11.352803  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352807  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.352810  568041 command_runner.go:130] >       },
	I1008 18:37:11.352813  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352816  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352819  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.352822  568041 command_runner.go:130] >     },
	I1008 18:37:11.352830  568041 command_runner.go:130] >     {
	I1008 18:37:11.352836  568041 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1008 18:37:11.352839  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.352843  568041 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1008 18:37:11.352846  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352850  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.352856  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1008 18:37:11.352862  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1008 18:37:11.352864  568041 command_runner.go:130] >       ],
	I1008 18:37:11.352868  568041 command_runner.go:130] >       "size": "742080",
	I1008 18:37:11.352871  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.352875  568041 command_runner.go:130] >         "value": "65535"
	I1008 18:37:11.352878  568041 command_runner.go:130] >       },
	I1008 18:37:11.352882  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.352886  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.352889  568041 command_runner.go:130] >       "pinned": true
	I1008 18:37:11.352893  568041 command_runner.go:130] >     }
	I1008 18:37:11.352899  568041 command_runner.go:130] >   ]
	I1008 18:37:11.352902  568041 command_runner.go:130] > }
	I1008 18:37:11.353846  568041 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:37:11.353867  568041 crio.go:433] Images already preloaded, skipping extraction
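The preload check above parses the "sudo crictl images --output json" dump and concludes that the required images are already present, so extraction is skipped. A small sketch of that kind of check; the field names mirror the JSON shown in the dump, and the expected-tag list is a hand-picked subset of the images above, not minikube's full required set.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models just the fields of `crictl images --output json` used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}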
	I1008 18:37:11.353910  568041 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:37:11.383502  568041 command_runner.go:130] > {
	I1008 18:37:11.383528  568041 command_runner.go:130] >   "images": [
	I1008 18:37:11.383534  568041 command_runner.go:130] >     {
	I1008 18:37:11.383560  568041 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1008 18:37:11.383568  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383574  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1008 18:37:11.383582  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383588  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383605  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1008 18:37:11.383619  568041 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1008 18:37:11.383625  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383635  568041 command_runner.go:130] >       "size": "87190579",
	I1008 18:37:11.383644  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383653  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.383671  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383680  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383685  568041 command_runner.go:130] >     },
	I1008 18:37:11.383691  568041 command_runner.go:130] >     {
	I1008 18:37:11.383704  568041 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1008 18:37:11.383711  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383722  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1008 18:37:11.383731  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383740  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383752  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1008 18:37:11.383765  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1008 18:37:11.383771  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383775  568041 command_runner.go:130] >       "size": "1363676",
	I1008 18:37:11.383780  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383789  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.383795  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383799  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383802  568041 command_runner.go:130] >     },
	I1008 18:37:11.383805  568041 command_runner.go:130] >     {
	I1008 18:37:11.383811  568041 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1008 18:37:11.383817  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383822  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1008 18:37:11.383832  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383839  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383846  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1008 18:37:11.383855  568041 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1008 18:37:11.383858  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383862  568041 command_runner.go:130] >       "size": "31470524",
	I1008 18:37:11.383866  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383869  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.383874  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383878  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383884  568041 command_runner.go:130] >     },
	I1008 18:37:11.383887  568041 command_runner.go:130] >     {
	I1008 18:37:11.383893  568041 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1008 18:37:11.383897  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383902  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1008 18:37:11.383906  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383910  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383917  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1008 18:37:11.383933  568041 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1008 18:37:11.383939  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383942  568041 command_runner.go:130] >       "size": "63273227",
	I1008 18:37:11.383946  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.383950  568041 command_runner.go:130] >       "username": "nonroot",
	I1008 18:37:11.383956  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.383962  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.383965  568041 command_runner.go:130] >     },
	I1008 18:37:11.383968  568041 command_runner.go:130] >     {
	I1008 18:37:11.383974  568041 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1008 18:37:11.383978  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.383982  568041 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1008 18:37:11.383986  568041 command_runner.go:130] >       ],
	I1008 18:37:11.383990  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.383999  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1008 18:37:11.384178  568041 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1008 18:37:11.384302  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384316  568041 command_runner.go:130] >       "size": "149009664",
	I1008 18:37:11.384323  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.384330  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.384336  568041 command_runner.go:130] >       },
	I1008 18:37:11.384342  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384349  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384361  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384366  568041 command_runner.go:130] >     },
	I1008 18:37:11.384371  568041 command_runner.go:130] >     {
	I1008 18:37:11.384381  568041 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1008 18:37:11.384390  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384404  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1008 18:37:11.384411  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384419  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.384441  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1008 18:37:11.384457  568041 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1008 18:37:11.384490  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384566  568041 command_runner.go:130] >       "size": "95237600",
	I1008 18:37:11.384580  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.384593  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.384601  568041 command_runner.go:130] >       },
	I1008 18:37:11.384608  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384623  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384631  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384637  568041 command_runner.go:130] >     },
	I1008 18:37:11.384642  568041 command_runner.go:130] >     {
	I1008 18:37:11.384661  568041 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1008 18:37:11.384671  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384683  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1008 18:37:11.384689  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384695  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.384711  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1008 18:37:11.384723  568041 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1008 18:37:11.384730  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384742  568041 command_runner.go:130] >       "size": "89437508",
	I1008 18:37:11.384748  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.384754  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.384765  568041 command_runner.go:130] >       },
	I1008 18:37:11.384771  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384777  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384783  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384792  568041 command_runner.go:130] >     },
	I1008 18:37:11.384797  568041 command_runner.go:130] >     {
	I1008 18:37:11.384806  568041 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1008 18:37:11.384812  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384824  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1008 18:37:11.384834  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384841  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.384875  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1008 18:37:11.384892  568041 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1008 18:37:11.384903  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384910  568041 command_runner.go:130] >       "size": "92733849",
	I1008 18:37:11.384916  568041 command_runner.go:130] >       "uid": null,
	I1008 18:37:11.384922  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.384933  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.384939  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.384944  568041 command_runner.go:130] >     },
	I1008 18:37:11.384950  568041 command_runner.go:130] >     {
	I1008 18:37:11.384959  568041 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1008 18:37:11.384970  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.384978  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1008 18:37:11.384984  568041 command_runner.go:130] >       ],
	I1008 18:37:11.384990  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.385006  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1008 18:37:11.385017  568041 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1008 18:37:11.385022  568041 command_runner.go:130] >       ],
	I1008 18:37:11.385029  568041 command_runner.go:130] >       "size": "68420934",
	I1008 18:37:11.385039  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.385046  568041 command_runner.go:130] >         "value": "0"
	I1008 18:37:11.385051  568041 command_runner.go:130] >       },
	I1008 18:37:11.385057  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.385065  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.385071  568041 command_runner.go:130] >       "pinned": false
	I1008 18:37:11.385081  568041 command_runner.go:130] >     },
	I1008 18:37:11.385087  568041 command_runner.go:130] >     {
	I1008 18:37:11.385097  568041 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1008 18:37:11.385103  568041 command_runner.go:130] >       "repoTags": [
	I1008 18:37:11.385111  568041 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1008 18:37:11.385121  568041 command_runner.go:130] >       ],
	I1008 18:37:11.385127  568041 command_runner.go:130] >       "repoDigests": [
	I1008 18:37:11.385138  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1008 18:37:11.385158  568041 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1008 18:37:11.385163  568041 command_runner.go:130] >       ],
	I1008 18:37:11.385176  568041 command_runner.go:130] >       "size": "742080",
	I1008 18:37:11.385182  568041 command_runner.go:130] >       "uid": {
	I1008 18:37:11.385189  568041 command_runner.go:130] >         "value": "65535"
	I1008 18:37:11.385194  568041 command_runner.go:130] >       },
	I1008 18:37:11.385204  568041 command_runner.go:130] >       "username": "",
	I1008 18:37:11.385210  568041 command_runner.go:130] >       "spec": null,
	I1008 18:37:11.385215  568041 command_runner.go:130] >       "pinned": true
	I1008 18:37:11.385220  568041 command_runner.go:130] >     }
	I1008 18:37:11.385225  568041 command_runner.go:130] >   ]
	I1008 18:37:11.385229  568041 command_runner.go:130] > }
	I1008 18:37:11.385545  568041 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:37:11.385564  568041 cache_images.go:84] Images are preloaded, skipping loading
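	(Editor's aside, not part of the captured log: the JSON above is the CRI-O image inventory that the two lines just logged use to conclude the v1.31.1 preload is complete. Purely as an illustration, and not minikube's actual crio.go/cache_images.go code, a minimal Go sketch of that kind of check, assuming an "images -o json" style payload with the "images"/"repoTags" fields shown above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type criImage struct {
		RepoTags []string `json:"repoTags"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	// allPreloaded reports whether every required repo tag appears in the inventory.
	func allPreloaded(raw []byte, required []string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range required {
			if !have[want] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Tiny inventory in the same shape as the logged output (illustrative only).
		raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"]},{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
		ok, err := allPreloaded(raw, []string{"registry.k8s.io/kube-proxy:v1.31.1", "registry.k8s.io/pause:3.10"})
		fmt.Println(ok, err) // true <nil>
	}

	Any required tag missing from the inventory would force the cached images to be loaded rather than skipped, which is the decision the log records here.)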
	I1008 18:37:11.385597  568041 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.31.1 crio true true} ...
	I1008 18:37:11.386241  568041 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-255508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
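	(Editor's aside, not part of the captured log: the kubelet unit snippet above is the per-node systemd override, with the node-specific ExecStart flags (--hostname-override, --node-ip) filled in from the cluster config printed after it. As a hedged sketch only, assuming one wanted to render such a drop-in from those three parameters, a small Go text/template example; the struct fields here are hypothetical, not minikube's types:

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn mirrors the override logged above; the placeholders are illustrative.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		node := struct {
			KubernetesVersion, Hostname, NodeIP string
		}{"v1.31.1", "multinode-255508", "192.168.39.43"}
		// Print the rendered unit; on a real node such an override would typically be
		// written under a kubelet.service.d/ drop-in directory and picked up after a
		// systemd daemon-reload.
		tmpl := template.Must(template.New("kubelet").Parse(dropIn))
		_ = tmpl.Execute(os.Stdout, node)
	}
	)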
	I1008 18:37:11.386343  568041 ssh_runner.go:195] Run: crio config
	I1008 18:37:11.427654  568041 command_runner.go:130] ! time="2024-10-08 18:37:11.394911843Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1008 18:37:11.433425  568041 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1008 18:37:11.445844  568041 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1008 18:37:11.445869  568041 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1008 18:37:11.445875  568041 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1008 18:37:11.445878  568041 command_runner.go:130] > #
	I1008 18:37:11.445885  568041 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1008 18:37:11.445890  568041 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1008 18:37:11.445896  568041 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1008 18:37:11.445918  568041 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1008 18:37:11.445922  568041 command_runner.go:130] > # reload'.
	I1008 18:37:11.445928  568041 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1008 18:37:11.445937  568041 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1008 18:37:11.445942  568041 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1008 18:37:11.445948  568041 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1008 18:37:11.445954  568041 command_runner.go:130] > [crio]
	I1008 18:37:11.445961  568041 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1008 18:37:11.445966  568041 command_runner.go:130] > # container images, in this directory.
	I1008 18:37:11.445970  568041 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1008 18:37:11.445978  568041 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1008 18:37:11.445988  568041 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1008 18:37:11.445995  568041 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I1008 18:37:11.445998  568041 command_runner.go:130] > # imagestore = ""
	I1008 18:37:11.446004  568041 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1008 18:37:11.446010  568041 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1008 18:37:11.446014  568041 command_runner.go:130] > storage_driver = "overlay"
	I1008 18:37:11.446020  568041 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1008 18:37:11.446026  568041 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1008 18:37:11.446029  568041 command_runner.go:130] > storage_option = [
	I1008 18:37:11.446034  568041 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1008 18:37:11.446039  568041 command_runner.go:130] > ]
	I1008 18:37:11.446045  568041 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1008 18:37:11.446051  568041 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1008 18:37:11.446056  568041 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1008 18:37:11.446061  568041 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1008 18:37:11.446067  568041 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1008 18:37:11.446073  568041 command_runner.go:130] > # always happen on a node reboot
	I1008 18:37:11.446077  568041 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1008 18:37:11.446086  568041 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1008 18:37:11.446094  568041 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1008 18:37:11.446099  568041 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1008 18:37:11.446105  568041 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1008 18:37:11.446112  568041 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1008 18:37:11.446119  568041 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1008 18:37:11.446123  568041 command_runner.go:130] > # internal_wipe = true
	I1008 18:37:11.446130  568041 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1008 18:37:11.446135  568041 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1008 18:37:11.446145  568041 command_runner.go:130] > # internal_repair = false
	I1008 18:37:11.446151  568041 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1008 18:37:11.446157  568041 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1008 18:37:11.446162  568041 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1008 18:37:11.446167  568041 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1008 18:37:11.446174  568041 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1008 18:37:11.446178  568041 command_runner.go:130] > [crio.api]
	I1008 18:37:11.446184  568041 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1008 18:37:11.446191  568041 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1008 18:37:11.446196  568041 command_runner.go:130] > # IP address on which the stream server will listen.
	I1008 18:37:11.446200  568041 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1008 18:37:11.446206  568041 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1008 18:37:11.446211  568041 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1008 18:37:11.446215  568041 command_runner.go:130] > # stream_port = "0"
	I1008 18:37:11.446220  568041 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1008 18:37:11.446226  568041 command_runner.go:130] > # stream_enable_tls = false
	I1008 18:37:11.446231  568041 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1008 18:37:11.446235  568041 command_runner.go:130] > # stream_idle_timeout = ""
	I1008 18:37:11.446240  568041 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1008 18:37:11.446248  568041 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1008 18:37:11.446252  568041 command_runner.go:130] > # minutes.
	I1008 18:37:11.446255  568041 command_runner.go:130] > # stream_tls_cert = ""
	I1008 18:37:11.446262  568041 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1008 18:37:11.446269  568041 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1008 18:37:11.446273  568041 command_runner.go:130] > # stream_tls_key = ""
	I1008 18:37:11.446281  568041 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1008 18:37:11.446287  568041 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1008 18:37:11.446306  568041 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1008 18:37:11.446312  568041 command_runner.go:130] > # stream_tls_ca = ""
	I1008 18:37:11.446332  568041 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 18:37:11.446339  568041 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1008 18:37:11.446352  568041 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1008 18:37:11.446359  568041 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1008 18:37:11.446369  568041 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1008 18:37:11.446377  568041 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1008 18:37:11.446382  568041 command_runner.go:130] > [crio.runtime]
	I1008 18:37:11.446389  568041 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1008 18:37:11.446394  568041 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1008 18:37:11.446400  568041 command_runner.go:130] > # "nofile=1024:2048"
	I1008 18:37:11.446406  568041 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1008 18:37:11.446412  568041 command_runner.go:130] > # default_ulimits = [
	I1008 18:37:11.446415  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446421  568041 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1008 18:37:11.446425  568041 command_runner.go:130] > # no_pivot = false
	I1008 18:37:11.446433  568041 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1008 18:37:11.446441  568041 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1008 18:37:11.446445  568041 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1008 18:37:11.446451  568041 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1008 18:37:11.446456  568041 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1008 18:37:11.446464  568041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 18:37:11.446469  568041 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1008 18:37:11.446475  568041 command_runner.go:130] > # Cgroup setting for conmon
	I1008 18:37:11.446482  568041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1008 18:37:11.446488  568041 command_runner.go:130] > conmon_cgroup = "pod"
	I1008 18:37:11.446494  568041 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1008 18:37:11.446500  568041 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1008 18:37:11.446511  568041 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1008 18:37:11.446516  568041 command_runner.go:130] > conmon_env = [
	I1008 18:37:11.446522  568041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1008 18:37:11.446527  568041 command_runner.go:130] > ]
	I1008 18:37:11.446532  568041 command_runner.go:130] > # Additional environment variables to set for all the
	I1008 18:37:11.446537  568041 command_runner.go:130] > # containers. These are overridden if set in the
	I1008 18:37:11.446544  568041 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1008 18:37:11.446548  568041 command_runner.go:130] > # default_env = [
	I1008 18:37:11.446553  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446558  568041 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1008 18:37:11.446570  568041 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1008 18:37:11.446576  568041 command_runner.go:130] > # selinux = false
	I1008 18:37:11.446582  568041 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1008 18:37:11.446588  568041 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1008 18:37:11.446595  568041 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1008 18:37:11.446599  568041 command_runner.go:130] > # seccomp_profile = ""
	I1008 18:37:11.446604  568041 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1008 18:37:11.446610  568041 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1008 18:37:11.446615  568041 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1008 18:37:11.446622  568041 command_runner.go:130] > # which might increase security.
	I1008 18:37:11.446626  568041 command_runner.go:130] > # This option is currently deprecated,
	I1008 18:37:11.446631  568041 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1008 18:37:11.446638  568041 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1008 18:37:11.446643  568041 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1008 18:37:11.446650  568041 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1008 18:37:11.446658  568041 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1008 18:37:11.446666  568041 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1008 18:37:11.446670  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.446675  568041 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1008 18:37:11.446682  568041 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1008 18:37:11.446686  568041 command_runner.go:130] > # the cgroup blockio controller.
	I1008 18:37:11.446692  568041 command_runner.go:130] > # blockio_config_file = ""
	I1008 18:37:11.446698  568041 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1008 18:37:11.446703  568041 command_runner.go:130] > # blockio parameters.
	I1008 18:37:11.446707  568041 command_runner.go:130] > # blockio_reload = false
	I1008 18:37:11.446713  568041 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1008 18:37:11.446719  568041 command_runner.go:130] > # irqbalance daemon.
	I1008 18:37:11.446724  568041 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1008 18:37:11.446729  568041 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1008 18:37:11.446736  568041 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1008 18:37:11.446742  568041 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1008 18:37:11.446749  568041 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1008 18:37:11.446756  568041 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1008 18:37:11.446769  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.446774  568041 command_runner.go:130] > # rdt_config_file = ""
	I1008 18:37:11.446779  568041 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1008 18:37:11.446785  568041 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1008 18:37:11.446813  568041 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1008 18:37:11.446820  568041 command_runner.go:130] > # separate_pull_cgroup = ""
	I1008 18:37:11.446826  568041 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1008 18:37:11.446834  568041 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1008 18:37:11.446840  568041 command_runner.go:130] > # will be added.
	I1008 18:37:11.446844  568041 command_runner.go:130] > # default_capabilities = [
	I1008 18:37:11.446847  568041 command_runner.go:130] > # 	"CHOWN",
	I1008 18:37:11.446851  568041 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1008 18:37:11.446854  568041 command_runner.go:130] > # 	"FSETID",
	I1008 18:37:11.446858  568041 command_runner.go:130] > # 	"FOWNER",
	I1008 18:37:11.446862  568041 command_runner.go:130] > # 	"SETGID",
	I1008 18:37:11.446865  568041 command_runner.go:130] > # 	"SETUID",
	I1008 18:37:11.446869  568041 command_runner.go:130] > # 	"SETPCAP",
	I1008 18:37:11.446872  568041 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1008 18:37:11.446876  568041 command_runner.go:130] > # 	"KILL",
	I1008 18:37:11.446879  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446889  568041 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1008 18:37:11.446897  568041 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1008 18:37:11.446901  568041 command_runner.go:130] > # add_inheritable_capabilities = false
	I1008 18:37:11.446909  568041 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1008 18:37:11.446916  568041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 18:37:11.446920  568041 command_runner.go:130] > default_sysctls = [
	I1008 18:37:11.446926  568041 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1008 18:37:11.446929  568041 command_runner.go:130] > ]
	I1008 18:37:11.446933  568041 command_runner.go:130] > # List of devices on the host that a
	I1008 18:37:11.446940  568041 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1008 18:37:11.446944  568041 command_runner.go:130] > # allowed_devices = [
	I1008 18:37:11.446947  568041 command_runner.go:130] > # 	"/dev/fuse",
	I1008 18:37:11.446950  568041 command_runner.go:130] > # ]
	I1008 18:37:11.446977  568041 command_runner.go:130] > # List of additional devices, specified as
	I1008 18:37:11.446992  568041 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1008 18:37:11.447000  568041 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1008 18:37:11.447006  568041 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1008 18:37:11.447012  568041 command_runner.go:130] > # additional_devices = [
	I1008 18:37:11.447015  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447020  568041 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1008 18:37:11.447026  568041 command_runner.go:130] > # cdi_spec_dirs = [
	I1008 18:37:11.447029  568041 command_runner.go:130] > # 	"/etc/cdi",
	I1008 18:37:11.447033  568041 command_runner.go:130] > # 	"/var/run/cdi",
	I1008 18:37:11.447038  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447044  568041 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1008 18:37:11.447052  568041 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1008 18:37:11.447056  568041 command_runner.go:130] > # Defaults to false.
	I1008 18:37:11.447063  568041 command_runner.go:130] > # device_ownership_from_security_context = false
	I1008 18:37:11.447068  568041 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1008 18:37:11.447075  568041 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1008 18:37:11.447078  568041 command_runner.go:130] > # hooks_dir = [
	I1008 18:37:11.447083  568041 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1008 18:37:11.447088  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447094  568041 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1008 18:37:11.447102  568041 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1008 18:37:11.447107  568041 command_runner.go:130] > # its default mounts from the following two files:
	I1008 18:37:11.447110  568041 command_runner.go:130] > #
	I1008 18:37:11.447116  568041 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1008 18:37:11.447124  568041 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1008 18:37:11.447129  568041 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1008 18:37:11.447134  568041 command_runner.go:130] > #
	I1008 18:37:11.447140  568041 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1008 18:37:11.447148  568041 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1008 18:37:11.447153  568041 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1008 18:37:11.447163  568041 command_runner.go:130] > #      only add mounts it finds in this file.
	I1008 18:37:11.447169  568041 command_runner.go:130] > #
	I1008 18:37:11.447177  568041 command_runner.go:130] > # default_mounts_file = ""
	I1008 18:37:11.447184  568041 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1008 18:37:11.447191  568041 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1008 18:37:11.447194  568041 command_runner.go:130] > pids_limit = 1024
	I1008 18:37:11.447200  568041 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1008 18:37:11.447209  568041 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1008 18:37:11.447215  568041 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1008 18:37:11.447225  568041 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1008 18:37:11.447229  568041 command_runner.go:130] > # log_size_max = -1
	I1008 18:37:11.447235  568041 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1008 18:37:11.447241  568041 command_runner.go:130] > # log_to_journald = false
	I1008 18:37:11.447247  568041 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1008 18:37:11.447254  568041 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1008 18:37:11.447260  568041 command_runner.go:130] > # Path to directory for container attach sockets.
	I1008 18:37:11.447267  568041 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1008 18:37:11.447272  568041 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1008 18:37:11.447277  568041 command_runner.go:130] > # bind_mount_prefix = ""
	I1008 18:37:11.447282  568041 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1008 18:37:11.447288  568041 command_runner.go:130] > # read_only = false
	I1008 18:37:11.447293  568041 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1008 18:37:11.447299  568041 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1008 18:37:11.447305  568041 command_runner.go:130] > # live configuration reload.
	I1008 18:37:11.447309  568041 command_runner.go:130] > # log_level = "info"
	I1008 18:37:11.447314  568041 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1008 18:37:11.447321  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.447325  568041 command_runner.go:130] > # log_filter = ""
	I1008 18:37:11.447332  568041 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1008 18:37:11.447340  568041 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1008 18:37:11.447346  568041 command_runner.go:130] > # separated by comma.
	I1008 18:37:11.447353  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447366  568041 command_runner.go:130] > # uid_mappings = ""
	I1008 18:37:11.447376  568041 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1008 18:37:11.447382  568041 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1008 18:37:11.447393  568041 command_runner.go:130] > # separated by comma.
	I1008 18:37:11.447401  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447408  568041 command_runner.go:130] > # gid_mappings = ""
	I1008 18:37:11.447414  568041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1008 18:37:11.447422  568041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 18:37:11.447428  568041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 18:37:11.447437  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447441  568041 command_runner.go:130] > # minimum_mappable_uid = -1
	I1008 18:37:11.447448  568041 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1008 18:37:11.447455  568041 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1008 18:37:11.447461  568041 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1008 18:37:11.447470  568041 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1008 18:37:11.447474  568041 command_runner.go:130] > # minimum_mappable_gid = -1
	I1008 18:37:11.447479  568041 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1008 18:37:11.447486  568041 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1008 18:37:11.447491  568041 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1008 18:37:11.447498  568041 command_runner.go:130] > # ctr_stop_timeout = 30
	I1008 18:37:11.447503  568041 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1008 18:37:11.447512  568041 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1008 18:37:11.447519  568041 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1008 18:37:11.447524  568041 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1008 18:37:11.447530  568041 command_runner.go:130] > drop_infra_ctr = false
	I1008 18:37:11.447536  568041 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1008 18:37:11.447544  568041 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1008 18:37:11.447550  568041 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1008 18:37:11.447556  568041 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1008 18:37:11.447563  568041 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1008 18:37:11.447569  568041 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1008 18:37:11.447574  568041 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1008 18:37:11.447582  568041 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1008 18:37:11.447585  568041 command_runner.go:130] > # shared_cpuset = ""
	I1008 18:37:11.447593  568041 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1008 18:37:11.447597  568041 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1008 18:37:11.447607  568041 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1008 18:37:11.447616  568041 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1008 18:37:11.447622  568041 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1008 18:37:11.447628  568041 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1008 18:37:11.447636  568041 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1008 18:37:11.447641  568041 command_runner.go:130] > # enable_criu_support = false
	I1008 18:37:11.447645  568041 command_runner.go:130] > # Enable/disable the generation of the container,
	I1008 18:37:11.447651  568041 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1008 18:37:11.447655  568041 command_runner.go:130] > # enable_pod_events = false
	I1008 18:37:11.447661  568041 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1008 18:37:11.447674  568041 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1008 18:37:11.447680  568041 command_runner.go:130] > # default_runtime = "runc"
	I1008 18:37:11.447685  568041 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1008 18:37:11.447694  568041 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1008 18:37:11.447702  568041 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1008 18:37:11.447709  568041 command_runner.go:130] > # creation as a file is not desired either.
	I1008 18:37:11.447717  568041 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1008 18:37:11.447724  568041 command_runner.go:130] > # the hostname is being managed dynamically.
	I1008 18:37:11.447728  568041 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1008 18:37:11.447731  568041 command_runner.go:130] > # ]
	I1008 18:37:11.447737  568041 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1008 18:37:11.447745  568041 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1008 18:37:11.447751  568041 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1008 18:37:11.447759  568041 command_runner.go:130] > # Each entry in the table should follow the format:
	I1008 18:37:11.447763  568041 command_runner.go:130] > #
	I1008 18:37:11.447767  568041 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1008 18:37:11.447772  568041 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1008 18:37:11.447800  568041 command_runner.go:130] > # runtime_type = "oci"
	I1008 18:37:11.447807  568041 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1008 18:37:11.447812  568041 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1008 18:37:11.447818  568041 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1008 18:37:11.447823  568041 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1008 18:37:11.447834  568041 command_runner.go:130] > # monitor_env = []
	I1008 18:37:11.447841  568041 command_runner.go:130] > # privileged_without_host_devices = false
	I1008 18:37:11.447845  568041 command_runner.go:130] > # allowed_annotations = []
	I1008 18:37:11.447850  568041 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1008 18:37:11.447855  568041 command_runner.go:130] > # Where:
	I1008 18:37:11.447859  568041 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1008 18:37:11.447865  568041 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1008 18:37:11.447873  568041 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1008 18:37:11.447879  568041 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1008 18:37:11.447887  568041 command_runner.go:130] > #   in $PATH.
	I1008 18:37:11.447893  568041 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1008 18:37:11.447897  568041 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1008 18:37:11.447906  568041 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1008 18:37:11.447912  568041 command_runner.go:130] > #   state.
	I1008 18:37:11.447918  568041 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1008 18:37:11.447926  568041 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1008 18:37:11.447932  568041 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1008 18:37:11.447939  568041 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1008 18:37:11.447945  568041 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1008 18:37:11.447953  568041 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1008 18:37:11.447958  568041 command_runner.go:130] > #   The currently recognized values are:
	I1008 18:37:11.447965  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1008 18:37:11.447974  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1008 18:37:11.447979  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1008 18:37:11.447988  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1008 18:37:11.447994  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1008 18:37:11.448002  568041 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1008 18:37:11.448009  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1008 18:37:11.448017  568041 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1008 18:37:11.448023  568041 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1008 18:37:11.448031  568041 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1008 18:37:11.448035  568041 command_runner.go:130] > #   deprecated option "conmon".
	I1008 18:37:11.448042  568041 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1008 18:37:11.448049  568041 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1008 18:37:11.448057  568041 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1008 18:37:11.448062  568041 command_runner.go:130] > #   should be moved to the container's cgroup
	I1008 18:37:11.448068  568041 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1008 18:37:11.448076  568041 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1008 18:37:11.448081  568041 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1008 18:37:11.448087  568041 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1008 18:37:11.448090  568041 command_runner.go:130] > #
	I1008 18:37:11.448097  568041 command_runner.go:130] > # Using the seccomp notifier feature:
	I1008 18:37:11.448105  568041 command_runner.go:130] > #
	I1008 18:37:11.448111  568041 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1008 18:37:11.448119  568041 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1008 18:37:11.448123  568041 command_runner.go:130] > #
	I1008 18:37:11.448128  568041 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1008 18:37:11.448134  568041 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1008 18:37:11.448140  568041 command_runner.go:130] > #
	I1008 18:37:11.448146  568041 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1008 18:37:11.448150  568041 command_runner.go:130] > # feature.
	I1008 18:37:11.448155  568041 command_runner.go:130] > #
	I1008 18:37:11.448160  568041 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1008 18:37:11.448166  568041 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1008 18:37:11.448173  568041 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1008 18:37:11.448179  568041 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1008 18:37:11.448187  568041 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1008 18:37:11.448190  568041 command_runner.go:130] > #
	I1008 18:37:11.448195  568041 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1008 18:37:11.448201  568041 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1008 18:37:11.448205  568041 command_runner.go:130] > #
	I1008 18:37:11.448210  568041 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1008 18:37:11.448218  568041 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1008 18:37:11.448221  568041 command_runner.go:130] > #
	I1008 18:37:11.448226  568041 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1008 18:37:11.448234  568041 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1008 18:37:11.448241  568041 command_runner.go:130] > # limitation.
	I1008 18:37:11.448249  568041 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1008 18:37:11.448253  568041 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1008 18:37:11.448258  568041 command_runner.go:130] > runtime_type = "oci"
	I1008 18:37:11.448263  568041 command_runner.go:130] > runtime_root = "/run/runc"
	I1008 18:37:11.448269  568041 command_runner.go:130] > runtime_config_path = ""
	I1008 18:37:11.448273  568041 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1008 18:37:11.448278  568041 command_runner.go:130] > monitor_cgroup = "pod"
	I1008 18:37:11.448283  568041 command_runner.go:130] > monitor_exec_cgroup = ""
	I1008 18:37:11.448288  568041 command_runner.go:130] > monitor_env = [
	I1008 18:37:11.448293  568041 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1008 18:37:11.448298  568041 command_runner.go:130] > ]
	I1008 18:37:11.448303  568041 command_runner.go:130] > privileged_without_host_devices = false
	I1008 18:37:11.448309  568041 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1008 18:37:11.448316  568041 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1008 18:37:11.448322  568041 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1008 18:37:11.448331  568041 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1008 18:37:11.448340  568041 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1008 18:37:11.448347  568041 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1008 18:37:11.448356  568041 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1008 18:37:11.448366  568041 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1008 18:37:11.448373  568041 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1008 18:37:11.448379  568041 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1008 18:37:11.448385  568041 command_runner.go:130] > # Example:
	I1008 18:37:11.448389  568041 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1008 18:37:11.448393  568041 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1008 18:37:11.448398  568041 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1008 18:37:11.448405  568041 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1008 18:37:11.448409  568041 command_runner.go:130] > # cpuset = 0
	I1008 18:37:11.448415  568041 command_runner.go:130] > # cpushares = "0-1"
	I1008 18:37:11.448419  568041 command_runner.go:130] > # Where:
	I1008 18:37:11.448425  568041 command_runner.go:130] > # The workload name is workload-type.
	I1008 18:37:11.448431  568041 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1008 18:37:11.448443  568041 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1008 18:37:11.448450  568041 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1008 18:37:11.448457  568041 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1008 18:37:11.448465  568041 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1008 18:37:11.448470  568041 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1008 18:37:11.448478  568041 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1008 18:37:11.448483  568041 command_runner.go:130] > # Default value is set to true
	I1008 18:37:11.448489  568041 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1008 18:37:11.448494  568041 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1008 18:37:11.448501  568041 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1008 18:37:11.448505  568041 command_runner.go:130] > # Default value is set to 'false'
	I1008 18:37:11.448521  568041 command_runner.go:130] > # disable_hostport_mapping = false
	I1008 18:37:11.448527  568041 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1008 18:37:11.448530  568041 command_runner.go:130] > #
	I1008 18:37:11.448535  568041 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1008 18:37:11.448540  568041 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1008 18:37:11.448546  568041 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1008 18:37:11.448552  568041 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1008 18:37:11.448557  568041 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1008 18:37:11.448563  568041 command_runner.go:130] > [crio.image]
	I1008 18:37:11.448568  568041 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1008 18:37:11.448572  568041 command_runner.go:130] > # default_transport = "docker://"
	I1008 18:37:11.448577  568041 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1008 18:37:11.448583  568041 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1008 18:37:11.448586  568041 command_runner.go:130] > # global_auth_file = ""
	I1008 18:37:11.448591  568041 command_runner.go:130] > # The image used to instantiate infra containers.
	I1008 18:37:11.448595  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.448600  568041 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1008 18:37:11.448606  568041 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1008 18:37:11.448611  568041 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1008 18:37:11.448615  568041 command_runner.go:130] > # This option supports live configuration reload.
	I1008 18:37:11.448619  568041 command_runner.go:130] > # pause_image_auth_file = ""
	I1008 18:37:11.448624  568041 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1008 18:37:11.448632  568041 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1008 18:37:11.448637  568041 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1008 18:37:11.448642  568041 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1008 18:37:11.448646  568041 command_runner.go:130] > # pause_command = "/pause"
	I1008 18:37:11.448651  568041 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1008 18:37:11.448657  568041 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1008 18:37:11.448662  568041 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1008 18:37:11.448669  568041 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1008 18:37:11.448674  568041 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1008 18:37:11.448679  568041 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1008 18:37:11.448682  568041 command_runner.go:130] > # pinned_images = [
	I1008 18:37:11.448685  568041 command_runner.go:130] > # ]
	I1008 18:37:11.448691  568041 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1008 18:37:11.448697  568041 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1008 18:37:11.448705  568041 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1008 18:37:11.448712  568041 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1008 18:37:11.448716  568041 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1008 18:37:11.448720  568041 command_runner.go:130] > # signature_policy = ""
	I1008 18:37:11.448725  568041 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1008 18:37:11.448731  568041 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1008 18:37:11.448736  568041 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1008 18:37:11.448744  568041 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1008 18:37:11.448749  568041 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1008 18:37:11.448754  568041 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1008 18:37:11.448762  568041 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1008 18:37:11.448768  568041 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1008 18:37:11.448772  568041 command_runner.go:130] > # changing them here.
	I1008 18:37:11.448779  568041 command_runner.go:130] > # insecure_registries = [
	I1008 18:37:11.448783  568041 command_runner.go:130] > # ]
	I1008 18:37:11.448788  568041 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1008 18:37:11.448794  568041 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1008 18:37:11.448798  568041 command_runner.go:130] > # image_volumes = "mkdir"
	I1008 18:37:11.448806  568041 command_runner.go:130] > # Temporary directory to use for storing big files
	I1008 18:37:11.448812  568041 command_runner.go:130] > # big_files_temporary_dir = ""
	I1008 18:37:11.448821  568041 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1008 18:37:11.448824  568041 command_runner.go:130] > # CNI plugins.
	I1008 18:37:11.448829  568041 command_runner.go:130] > [crio.network]
	I1008 18:37:11.448835  568041 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1008 18:37:11.448842  568041 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1008 18:37:11.448846  568041 command_runner.go:130] > # cni_default_network = ""
	I1008 18:37:11.448853  568041 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1008 18:37:11.448858  568041 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1008 18:37:11.448864  568041 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1008 18:37:11.448869  568041 command_runner.go:130] > # plugin_dirs = [
	I1008 18:37:11.448873  568041 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1008 18:37:11.448876  568041 command_runner.go:130] > # ]
	I1008 18:37:11.448882  568041 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1008 18:37:11.448886  568041 command_runner.go:130] > [crio.metrics]
	I1008 18:37:11.448890  568041 command_runner.go:130] > # Globally enable or disable metrics support.
	I1008 18:37:11.448896  568041 command_runner.go:130] > enable_metrics = true
	I1008 18:37:11.448901  568041 command_runner.go:130] > # Specify enabled metrics collectors.
	I1008 18:37:11.448907  568041 command_runner.go:130] > # Per default all metrics are enabled.
	I1008 18:37:11.448913  568041 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1008 18:37:11.448922  568041 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1008 18:37:11.448930  568041 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1008 18:37:11.448934  568041 command_runner.go:130] > # metrics_collectors = [
	I1008 18:37:11.448939  568041 command_runner.go:130] > # 	"operations",
	I1008 18:37:11.448944  568041 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1008 18:37:11.448948  568041 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1008 18:37:11.448954  568041 command_runner.go:130] > # 	"operations_errors",
	I1008 18:37:11.448958  568041 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1008 18:37:11.448964  568041 command_runner.go:130] > # 	"image_pulls_by_name",
	I1008 18:37:11.448968  568041 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1008 18:37:11.448974  568041 command_runner.go:130] > # 	"image_pulls_failures",
	I1008 18:37:11.448980  568041 command_runner.go:130] > # 	"image_pulls_successes",
	I1008 18:37:11.448984  568041 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1008 18:37:11.448990  568041 command_runner.go:130] > # 	"image_layer_reuse",
	I1008 18:37:11.448996  568041 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1008 18:37:11.449000  568041 command_runner.go:130] > # 	"containers_oom_total",
	I1008 18:37:11.449004  568041 command_runner.go:130] > # 	"containers_oom",
	I1008 18:37:11.449008  568041 command_runner.go:130] > # 	"processes_defunct",
	I1008 18:37:11.449014  568041 command_runner.go:130] > # 	"operations_total",
	I1008 18:37:11.449017  568041 command_runner.go:130] > # 	"operations_latency_seconds",
	I1008 18:37:11.449022  568041 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1008 18:37:11.449026  568041 command_runner.go:130] > # 	"operations_errors_total",
	I1008 18:37:11.449030  568041 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1008 18:37:11.449035  568041 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1008 18:37:11.449039  568041 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1008 18:37:11.449043  568041 command_runner.go:130] > # 	"image_pulls_success_total",
	I1008 18:37:11.449049  568041 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1008 18:37:11.449053  568041 command_runner.go:130] > # 	"containers_oom_count_total",
	I1008 18:37:11.449057  568041 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1008 18:37:11.449063  568041 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1008 18:37:11.449067  568041 command_runner.go:130] > # ]
	I1008 18:37:11.449072  568041 command_runner.go:130] > # The port on which the metrics server will listen.
	I1008 18:37:11.449077  568041 command_runner.go:130] > # metrics_port = 9090
	I1008 18:37:11.449082  568041 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1008 18:37:11.449086  568041 command_runner.go:130] > # metrics_socket = ""
	I1008 18:37:11.449091  568041 command_runner.go:130] > # The certificate for the secure metrics server.
	I1008 18:37:11.449098  568041 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1008 18:37:11.449104  568041 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1008 18:37:11.449111  568041 command_runner.go:130] > # certificate on any modification event.
	I1008 18:37:11.449114  568041 command_runner.go:130] > # metrics_cert = ""
	I1008 18:37:11.449119  568041 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1008 18:37:11.449126  568041 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1008 18:37:11.449130  568041 command_runner.go:130] > # metrics_key = ""
	I1008 18:37:11.449137  568041 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1008 18:37:11.449141  568041 command_runner.go:130] > [crio.tracing]
	I1008 18:37:11.449146  568041 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1008 18:37:11.449151  568041 command_runner.go:130] > # enable_tracing = false
	I1008 18:37:11.449157  568041 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1008 18:37:11.449163  568041 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1008 18:37:11.449170  568041 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1008 18:37:11.449176  568041 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1008 18:37:11.449180  568041 command_runner.go:130] > # CRI-O NRI configuration.
	I1008 18:37:11.449185  568041 command_runner.go:130] > [crio.nri]
	I1008 18:37:11.449189  568041 command_runner.go:130] > # Globally enable or disable NRI.
	I1008 18:37:11.449195  568041 command_runner.go:130] > # enable_nri = false
	I1008 18:37:11.449201  568041 command_runner.go:130] > # NRI socket to listen on.
	I1008 18:37:11.449207  568041 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1008 18:37:11.449212  568041 command_runner.go:130] > # NRI plugin directory to use.
	I1008 18:37:11.449216  568041 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1008 18:37:11.449221  568041 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1008 18:37:11.449228  568041 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1008 18:37:11.449233  568041 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1008 18:37:11.449238  568041 command_runner.go:130] > # nri_disable_connections = false
	I1008 18:37:11.449245  568041 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1008 18:37:11.449250  568041 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1008 18:37:11.449257  568041 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1008 18:37:11.449261  568041 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1008 18:37:11.449269  568041 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1008 18:37:11.449273  568041 command_runner.go:130] > [crio.stats]
	I1008 18:37:11.449280  568041 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1008 18:37:11.449286  568041 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1008 18:37:11.449290  568041 command_runner.go:130] > # stats_collection_period = 0
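The config dump above ends with CRI-O's metrics, tracing, NRI, and stats sections: metrics are enabled (enable_metrics = true) while the port, tracing, and NRI settings stay at their commented defaults. As an illustrative aside only, a minimal Go sketch that scrapes the resulting Prometheus endpoint and prints the operations counters could look like the following; the 127.0.0.1:9090 address is an assumption based on the default metrics_port shown above, not something the test does.

	// metrics_probe.go - illustrative sketch, not part of minikube or this test.
	// Assumes CRI-O's metrics endpoint is reachable on the node at the default port 9090.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		// enable_metrics = true with the default metrics_port exposes
		// Prometheus text-format metrics at /metrics.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			line := sc.Text()
			// Collectors may carry the "crio_" or "container_runtime_" prefixes,
			// as described in the config comments above.
			if strings.HasPrefix(line, "crio_operations") ||
				strings.HasPrefix(line, "container_runtime_crio_operations") {
				fmt.Println(line)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}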
	I1008 18:37:11.449362  568041 cni.go:84] Creating CNI manager for ""
	I1008 18:37:11.449373  568041 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1008 18:37:11.449392  568041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:37:11.449415  568041 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-255508 NodeName:multinode-255508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:37:11.449562  568041 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-255508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
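The block above is the full kubeadm, kubelet, and kube-proxy configuration that minikube writes out for the node (see the scp to /var/tmp/minikube/kubeadm.yaml.new a few lines below). For illustration only, a small Go sketch of rendering such an InitConfiguration fragment from a struct with text/template is shown here; the template and struct are simplified assumptions, not minikube's actual kubeadm template.

	// Illustrative sketch: render a kubeadm InitConfiguration fragment from Go data.
	// The template and field names are simplified assumptions, not minikube's real template.
	package main

	import (
		"os"
		"text/template"
	)

	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("init").Parse(tmpl))
		// Values mirror the generated config shown above for multinode-255508.
		cfg := initCfg{
			AdvertiseAddress: "192.168.39.43",
			BindPort:         8443,
			NodeName:         "multinode-255508",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}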
	
	I1008 18:37:11.449637  568041 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:37:11.460294  568041 command_runner.go:130] > kubeadm
	I1008 18:37:11.460318  568041 command_runner.go:130] > kubectl
	I1008 18:37:11.460322  568041 command_runner.go:130] > kubelet
	I1008 18:37:11.460684  568041 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:37:11.460750  568041 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:37:11.469759  568041 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1008 18:37:11.485750  568041 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:37:11.501258  568041 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1008 18:37:11.517424  568041 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I1008 18:37:11.521128  568041 command_runner.go:130] > 192.168.39.43	control-plane.minikube.internal
	I1008 18:37:11.521191  568041 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:37:11.661762  568041 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:37:11.675953  568041 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508 for IP: 192.168.39.43
	I1008 18:37:11.675974  568041 certs.go:194] generating shared ca certs ...
	I1008 18:37:11.675992  568041 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:37:11.676168  568041 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:37:11.676207  568041 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:37:11.676217  568041 certs.go:256] generating profile certs ...
	I1008 18:37:11.676294  568041 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/client.key
	I1008 18:37:11.676345  568041 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.key.a701f6f9
	I1008 18:37:11.676392  568041 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.key
	I1008 18:37:11.676403  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 18:37:11.676419  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 18:37:11.676431  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 18:37:11.676443  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 18:37:11.676456  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 18:37:11.676468  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 18:37:11.676480  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 18:37:11.676492  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 18:37:11.676542  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:37:11.676569  568041 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:37:11.676577  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:37:11.676600  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:37:11.676626  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:37:11.676646  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:37:11.676682  568041 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:37:11.676707  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> /usr/share/ca-certificates/5370132.pem
	I1008 18:37:11.676724  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.676741  568041 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem -> /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.677320  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:37:11.701099  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:37:11.724839  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:37:11.748617  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:37:11.772150  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 18:37:11.795325  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:37:11.819052  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:37:11.842130  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/multinode-255508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:37:11.865438  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:37:11.888085  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:37:11.911184  568041 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:37:11.934370  568041 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:37:11.950108  568041 ssh_runner.go:195] Run: openssl version
	I1008 18:37:11.955596  568041 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1008 18:37:11.955690  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:37:11.966034  568041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.970050  568041 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.970307  568041 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.970359  568041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:37:11.975488  568041 command_runner.go:130] > b5213941
	I1008 18:37:11.975573  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:37:11.984017  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:37:11.993799  568041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.997726  568041 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.997867  568041 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:37:11.997899  568041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:37:12.003163  568041 command_runner.go:130] > 51391683
	I1008 18:37:12.003224  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:37:12.011824  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:37:12.022027  568041 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.026103  568041 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.026145  568041 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.026175  568041 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:37:12.031707  568041 command_runner.go:130] > 3ec20f2e
	I1008 18:37:12.031770  568041 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:37:12.040534  568041 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:37:12.044671  568041 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:37:12.044691  568041 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1008 18:37:12.044698  568041 command_runner.go:130] > Device: 253,1	Inode: 1054760     Links: 1
	I1008 18:37:12.044704  568041 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1008 18:37:12.044709  568041 command_runner.go:130] > Access: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044717  568041 command_runner.go:130] > Modify: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044729  568041 command_runner.go:130] > Change: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044740  568041 command_runner.go:130] >  Birth: 2024-10-08 18:30:38.711449003 +0000
	I1008 18:37:12.044795  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:37:12.050040  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.050105  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:37:12.055285  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.055336  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:37:12.060813  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.060925  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:37:12.065885  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.066034  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:37:12.071334  568041 command_runner.go:130] > Certificate will not expire
	I1008 18:37:12.071398  568041 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:37:12.076390  568041 command_runner.go:130] > Certificate will not expire
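Each of the checks above runs openssl x509 -noout -in <cert> -checkend 86400, which succeeds when the certificate is still valid 24 hours from now. A standalone Go equivalent using crypto/x509, shown purely as a sketch (the certificate path is an example argument, not taken from the test), would be:

	// Sketch of the "-checkend 86400" validity check using crypto/x509 instead of openssl.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// e.g. go run checkend.go /var/lib/minikube/certs/apiserver-kubelet-client.crt
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// openssl's -checkend 86400 asks: will the cert still be valid 86400s from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}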
	I1008 18:37:12.076554  568041 kubeadm.go:392] StartCluster: {Name:multinode-255508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-255508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.35 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:37:12.076691  568041 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:37:12.076729  568041 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:37:12.116861  568041 command_runner.go:130] > 672db4e9258153019940a867d1ad7d2253520762b6f23667dc5f5ef6e45d9318
	I1008 18:37:12.116895  568041 command_runner.go:130] > dbb17614f252c7bcbb0b8617e0310a7180ecd542750142b96cc63ae40345bd27
	I1008 18:37:12.116904  568041 command_runner.go:130] > 741cf09d69c22d616c5d54ab640f3f0d2229986097f1709c9a7cd52a92adbf8c
	I1008 18:37:12.116914  568041 command_runner.go:130] > c7c3519a922cdc33a9c9d911b58ba912091793679ccef944c75e4701cad7817f
	I1008 18:37:12.116923  568041 command_runner.go:130] > 6c1c60b60438057fd01ceecf74b3b223b69a378532b6ab5692e09a954c28569a
	I1008 18:37:12.116932  568041 command_runner.go:130] > 042f2bb068a141f95a10c6f223bdd18c22923616806263786c49a5cbee04d328
	I1008 18:37:12.116940  568041 command_runner.go:130] > 694038df9e668a5e55f19956048aab8d5a860b9b011446b24779138d4859b105
	I1008 18:37:12.116953  568041 command_runner.go:130] > 0cb8bb904b7b859112685b06aa32674e1f0fdeb6f1c9b970e6369d9988d9c74d
	I1008 18:37:12.116978  568041 cri.go:89] found id: "672db4e9258153019940a867d1ad7d2253520762b6f23667dc5f5ef6e45d9318"
	I1008 18:37:12.116987  568041 cri.go:89] found id: "dbb17614f252c7bcbb0b8617e0310a7180ecd542750142b96cc63ae40345bd27"
	I1008 18:37:12.116990  568041 cri.go:89] found id: "741cf09d69c22d616c5d54ab640f3f0d2229986097f1709c9a7cd52a92adbf8c"
	I1008 18:37:12.116994  568041 cri.go:89] found id: "c7c3519a922cdc33a9c9d911b58ba912091793679ccef944c75e4701cad7817f"
	I1008 18:37:12.116997  568041 cri.go:89] found id: "6c1c60b60438057fd01ceecf74b3b223b69a378532b6ab5692e09a954c28569a"
	I1008 18:37:12.117003  568041 cri.go:89] found id: "042f2bb068a141f95a10c6f223bdd18c22923616806263786c49a5cbee04d328"
	I1008 18:37:12.117006  568041 cri.go:89] found id: "694038df9e668a5e55f19956048aab8d5a860b9b011446b24779138d4859b105"
	I1008 18:37:12.117009  568041 cri.go:89] found id: "0cb8bb904b7b859112685b06aa32674e1f0fdeb6f1c9b970e6369d9988d9c74d"
	I1008 18:37:12.117011  568041 cri.go:89] found id: ""
	I1008 18:37:12.117050  568041 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-255508 -n multinode-255508
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-255508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.07s)
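Before the failure, the log above shows minikube enumerating kube-system containers with sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system and then listing them via runc. As a sketch only, the same ID listing could be reproduced from Go with os/exec; this assumes crictl is on PATH on the node and that sudo is available non-interactively.

	// Sketch: reproduce the container-ID listing seen in the log above by shelling out to crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// crictl --quiet prints one container ID per line, like the "found id:" entries above.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}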

                                                
                                    
x
+
TestPreload (268.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-133603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1008 18:45:34.835148  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:45:51.765047  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:46:38.896660  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-133603 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.462686522s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-133603 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-133603 image pull gcr.io/k8s-minikube/busybox: (2.228280384s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-133603
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-133603: exit status 82 (2m0.466983838s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-133603"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-133603 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-08 18:49:19.76749593 +0000 UTC m=+4553.256648130
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-133603 -n test-preload-133603
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-133603 -n test-preload-133603: exit status 3 (18.523893682s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 18:49:38.286731  572846 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host
	E1008 18:49:38.286754  572846 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.53:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-133603" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-133603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-133603
--- FAIL: TestPreload (268.63s)
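The post-mortem status errors above come from failing to open an SSH session to the VM at 192.168.39.53:22 ("no route to host"). A minimal Go reachability probe of that address, shown only as a sketch with the host and port copied from the log, would be:

	// Sketch: probe the VM's SSH port the way the failed status check implies.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.53:22", 5*time.Second)
		if err != nil {
			fmt.Println("status error:", err) // e.g. "connect: no route to host"
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}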

                                                
                                    
x
+
TestKubernetesUpgrade (521.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.596803905s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-302431] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-302431" primary control-plane node in "kubernetes-upgrade-302431" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:51:32.605589  573944 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:51:32.608008  573944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:51:32.608026  573944 out.go:358] Setting ErrFile to fd 2...
	I1008 18:51:32.608034  573944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:51:32.608244  573944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:51:32.608989  573944 out.go:352] Setting JSON to false
	I1008 18:51:32.609968  573944 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9245,"bootTime":1728404248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:51:32.610059  573944 start.go:139] virtualization: kvm guest
	I1008 18:51:32.612200  573944 out.go:177] * [kubernetes-upgrade-302431] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:51:32.613716  573944 notify.go:220] Checking for updates...
	I1008 18:51:32.614901  573944 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:51:32.617195  573944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:51:32.618467  573944 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:51:32.620232  573944 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:51:32.622252  573944 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:51:32.624585  573944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:51:32.625933  573944 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:51:32.658239  573944 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 18:51:32.659269  573944 start.go:297] selected driver: kvm2
	I1008 18:51:32.659284  573944 start.go:901] validating driver "kvm2" against <nil>
	I1008 18:51:32.659294  573944 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:51:32.660016  573944 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:51:32.674883  573944 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:51:32.690501  573944 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:51:32.690557  573944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:51:32.690860  573944 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 18:51:32.690889  573944 cni.go:84] Creating CNI manager for ""
	I1008 18:51:32.690938  573944 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:51:32.690945  573944 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 18:51:32.691016  573944 start.go:340] cluster config:
	{Name:kubernetes-upgrade-302431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:51:32.691178  573944 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:51:32.692926  573944 out.go:177] * Starting "kubernetes-upgrade-302431" primary control-plane node in "kubernetes-upgrade-302431" cluster
	I1008 18:51:32.694166  573944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 18:51:32.694204  573944 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 18:51:32.694216  573944 cache.go:56] Caching tarball of preloaded images
	I1008 18:51:32.694295  573944 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:51:32.694309  573944 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 18:51:32.694811  573944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/config.json ...
	I1008 18:51:32.694845  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/config.json: {Name:mk4e1cefcbb62eda5b650069b5b1a3124ee2e398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:51:32.694977  573944 start.go:360] acquireMachinesLock for kubernetes-upgrade-302431: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:51:58.422839  573944 start.go:364] duration metric: took 25.727790436s to acquireMachinesLock for "kubernetes-upgrade-302431"
	I1008 18:51:58.422924  573944 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-302431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:51:58.423036  573944 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 18:51:58.426131  573944 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 18:51:58.426385  573944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:51:58.426445  573944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:51:58.443581  573944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1008 18:51:58.443978  573944 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:51:58.444571  573944 main.go:141] libmachine: Using API Version  1
	I1008 18:51:58.444587  573944 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:51:58.444960  573944 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:51:58.445158  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetMachineName
	I1008 18:51:58.445522  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:51:58.445687  573944 start.go:159] libmachine.API.Create for "kubernetes-upgrade-302431" (driver="kvm2")
	I1008 18:51:58.445715  573944 client.go:168] LocalClient.Create starting
	I1008 18:51:58.445747  573944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 18:51:58.445773  573944 main.go:141] libmachine: Decoding PEM data...
	I1008 18:51:58.445789  573944 main.go:141] libmachine: Parsing certificate...
	I1008 18:51:58.445846  573944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 18:51:58.445866  573944 main.go:141] libmachine: Decoding PEM data...
	I1008 18:51:58.445876  573944 main.go:141] libmachine: Parsing certificate...
	I1008 18:51:58.445892  573944 main.go:141] libmachine: Running pre-create checks...
	I1008 18:51:58.445900  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .PreCreateCheck
	I1008 18:51:58.446240  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetConfigRaw
	I1008 18:51:58.446621  573944 main.go:141] libmachine: Creating machine...
	I1008 18:51:58.446638  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .Create
	I1008 18:51:58.446763  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Creating KVM machine...
	I1008 18:51:58.447731  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found existing default KVM network
	I1008 18:51:58.448844  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:51:58.448687  574271 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:9b:05} reservation:<nil>}
	I1008 18:51:58.449701  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:51:58.449614  574271 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00024a470}
	I1008 18:51:58.449727  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | created network xml: 
	I1008 18:51:58.449740  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | <network>
	I1008 18:51:58.449752  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |   <name>mk-kubernetes-upgrade-302431</name>
	I1008 18:51:58.449762  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |   <dns enable='no'/>
	I1008 18:51:58.449772  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |   
	I1008 18:51:58.449783  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1008 18:51:58.449802  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |     <dhcp>
	I1008 18:51:58.449859  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1008 18:51:58.449877  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |     </dhcp>
	I1008 18:51:58.449886  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |   </ip>
	I1008 18:51:58.449894  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG |   
	I1008 18:51:58.449903  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | </network>
	I1008 18:51:58.449912  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | 
	I1008 18:51:58.455003  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | trying to create private KVM network mk-kubernetes-upgrade-302431 192.168.50.0/24...
	I1008 18:51:58.520793  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | private KVM network mk-kubernetes-upgrade-302431 192.168.50.0/24 created
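The lines above show minikube's subnet selection: 192.168.39.0/24 is skipped because an existing cluster already uses it, and 192.168.50.0/24 is chosen for the new private KVM network. A simplified Go sketch of that "first free private /24" idea follows; the candidate list and the overlap test against local interface addresses are assumptions for illustration, not minikube's actual network.go logic.

	// Simplified sketch of picking a free private /24 for a new KVM network.
	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether any local interface address falls inside the subnet.
	func taken(subnet *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
		for _, c := range candidates {
			_, subnet, err := net.ParseCIDR(c)
			if err != nil {
				continue
			}
			if taken(subnet) {
				fmt.Println("skipping subnet", c, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", c)
			return
		}
		fmt.Println("no free subnet found")
	}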
	I1008 18:51:58.520831  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431 ...
	I1008 18:51:58.520857  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:51:58.520772  574271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:51:58.520880  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 18:51:58.520947  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 18:51:58.787785  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:51:58.787654  574271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa...
	I1008 18:51:59.019556  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:51:59.019321  574271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/kubernetes-upgrade-302431.rawdisk...
	I1008 18:51:59.019601  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Writing magic tar header
	I1008 18:51:59.019622  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Writing SSH key tar header
	I1008 18:51:59.019635  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:51:59.019506  574271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431 ...
	I1008 18:51:59.019659  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431
	I1008 18:51:59.019672  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 18:51:59.019687  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431 (perms=drwx------)
	I1008 18:51:59.019710  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 18:51:59.019720  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:51:59.019734  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 18:51:59.019750  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 18:51:59.019761  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 18:51:59.019775  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 18:51:59.019785  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home/jenkins
	I1008 18:51:59.019798  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Checking permissions on dir: /home
	I1008 18:51:59.019806  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Skipping /home - not owner
	I1008 18:51:59.019854  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 18:51:59.019876  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
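	The "Setting executable bit" / "Checking permissions" lines above show the driver walking from the machine directory up toward the store root, adding the owner-executable bit to each directory it owns and skipping those it does not (for example /home). A minimal Go sketch of that idea, with illustrative paths and a simplified "not owner" check rather than minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureExecutable walks from dir up to (and including) stop, adding the
// owner-executable bit to every directory along the way. Directories we
// cannot chmod are skipped, mirroring the "Skipping /home - not owner" line.
func ensureExecutable(dir, stop string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if info.Mode()&0o100 == 0 {
			if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
				// Likely not the owner; log and keep climbing.
				fmt.Printf("skipping %s: %v\n", dir, err)
			}
		}
		if dir == stop {
			return nil
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return nil // reached the filesystem root
		}
		dir = parent
	}
}

func main() {
	// Hypothetical paths standing in for the machine dir and the store path.
	_ = ensureExecutable("/home/jenkins/.minikube/machines/demo", "/home/jenkins")
}
```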
	I1008 18:51:59.019885  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Creating domain...
	I1008 18:51:59.020996  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) define libvirt domain using xml: 
	I1008 18:51:59.021016  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) <domain type='kvm'>
	I1008 18:51:59.021025  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <name>kubernetes-upgrade-302431</name>
	I1008 18:51:59.021033  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <memory unit='MiB'>2200</memory>
	I1008 18:51:59.021042  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <vcpu>2</vcpu>
	I1008 18:51:59.021048  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <features>
	I1008 18:51:59.021053  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <acpi/>
	I1008 18:51:59.021060  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <apic/>
	I1008 18:51:59.021066  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <pae/>
	I1008 18:51:59.021071  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     
	I1008 18:51:59.021076  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   </features>
	I1008 18:51:59.021081  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <cpu mode='host-passthrough'>
	I1008 18:51:59.021087  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   
	I1008 18:51:59.021096  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   </cpu>
	I1008 18:51:59.021104  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <os>
	I1008 18:51:59.021132  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <type>hvm</type>
	I1008 18:51:59.021143  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <boot dev='cdrom'/>
	I1008 18:51:59.021148  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <boot dev='hd'/>
	I1008 18:51:59.021153  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <bootmenu enable='no'/>
	I1008 18:51:59.021157  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   </os>
	I1008 18:51:59.021162  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   <devices>
	I1008 18:51:59.021166  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <disk type='file' device='cdrom'>
	I1008 18:51:59.021180  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/boot2docker.iso'/>
	I1008 18:51:59.021191  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <target dev='hdc' bus='scsi'/>
	I1008 18:51:59.021207  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <readonly/>
	I1008 18:51:59.021223  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </disk>
	I1008 18:51:59.021233  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <disk type='file' device='disk'>
	I1008 18:51:59.021245  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 18:51:59.021272  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/kubernetes-upgrade-302431.rawdisk'/>
	I1008 18:51:59.021283  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <target dev='hda' bus='virtio'/>
	I1008 18:51:59.021289  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </disk>
	I1008 18:51:59.021294  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <interface type='network'>
	I1008 18:51:59.021300  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <source network='mk-kubernetes-upgrade-302431'/>
	I1008 18:51:59.021311  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <model type='virtio'/>
	I1008 18:51:59.021342  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </interface>
	I1008 18:51:59.021366  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <interface type='network'>
	I1008 18:51:59.021377  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <source network='default'/>
	I1008 18:51:59.021387  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <model type='virtio'/>
	I1008 18:51:59.021397  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </interface>
	I1008 18:51:59.021408  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <serial type='pty'>
	I1008 18:51:59.021418  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <target port='0'/>
	I1008 18:51:59.021429  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </serial>
	I1008 18:51:59.021444  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <console type='pty'>
	I1008 18:51:59.021460  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <target type='serial' port='0'/>
	I1008 18:51:59.021472  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </console>
	I1008 18:51:59.021480  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     <rng model='virtio'>
	I1008 18:51:59.021492  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)       <backend model='random'>/dev/random</backend>
	I1008 18:51:59.021506  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     </rng>
	I1008 18:51:59.021515  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     
	I1008 18:51:59.021522  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)     
	I1008 18:51:59.021530  573944 main.go:141] libmachine: (kubernetes-upgrade-302431)   </devices>
	I1008 18:51:59.021541  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) </domain>
	I1008 18:51:59.021559  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) 
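	The domain XML dumped above is assembled by the kvm2 driver before the domain is defined in libvirt. A trimmed text/template sketch that produces a similar definition, purely to illustrate the structure; the config type and field names here are assumptions, not the driver's real types:

```go
package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values substituted into the XML below.
// It is an illustrative subset of what the kvm2 driver actually templates.
type domainConfig struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:     "kubernetes-upgrade-302431",
		MemoryMB: 2200,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/machine.rawdisk",
		Network:  "mk-kubernetes-upgrade-302431",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```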
	I1008 18:51:59.028422  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:9f:90:3c in network default
	I1008 18:51:59.029014  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Ensuring networks are active...
	I1008 18:51:59.029042  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:51:59.029633  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Ensuring network default is active
	I1008 18:51:59.029896  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Ensuring network mk-kubernetes-upgrade-302431 is active
	I1008 18:51:59.030357  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Getting domain xml...
	I1008 18:51:59.031011  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Creating domain...
	I1008 18:52:00.347861  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Waiting to get IP...
	I1008 18:52:00.348870  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:00.349455  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:00.349480  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:00.349363  574271 retry.go:31] will retry after 277.407484ms: waiting for machine to come up
	I1008 18:52:00.628067  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:00.628539  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:00.628567  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:00.628493  574271 retry.go:31] will retry after 234.799009ms: waiting for machine to come up
	I1008 18:52:00.865017  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:00.865558  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:00.865585  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:00.865459  574271 retry.go:31] will retry after 341.255008ms: waiting for machine to come up
	I1008 18:52:01.209049  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:01.209607  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:01.209652  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:01.209585  574271 retry.go:31] will retry after 598.980835ms: waiting for machine to come up
	I1008 18:52:01.810532  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:01.811032  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:01.811062  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:01.811000  574271 retry.go:31] will retry after 762.017973ms: waiting for machine to come up
	I1008 18:52:02.574399  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:02.574857  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:02.574885  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:02.574805  574271 retry.go:31] will retry after 818.937462ms: waiting for machine to come up
	I1008 18:52:03.395978  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:03.396497  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:03.396524  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:03.396437  574271 retry.go:31] will retry after 785.95027ms: waiting for machine to come up
	I1008 18:52:04.184482  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:04.184949  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:04.184981  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:04.184894  574271 retry.go:31] will retry after 1.121669371s: waiting for machine to come up
	I1008 18:52:05.308443  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:05.308948  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:05.308980  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:05.308915  574271 retry.go:31] will retry after 1.754736391s: waiting for machine to come up
	I1008 18:52:07.065856  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:07.066212  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:07.066242  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:07.066164  574271 retry.go:31] will retry after 1.752678225s: waiting for machine to come up
	I1008 18:52:08.820748  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:08.821254  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:08.821285  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:08.821196  574271 retry.go:31] will retry after 2.141466914s: waiting for machine to come up
	I1008 18:52:10.964196  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:10.964699  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:10.964731  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:10.964647  574271 retry.go:31] will retry after 2.553199883s: waiting for machine to come up
	I1008 18:52:13.519967  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:13.520378  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:13.520410  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:13.520327  574271 retry.go:31] will retry after 4.2543195s: waiting for machine to come up
	I1008 18:52:17.779288  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:17.779659  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find current IP address of domain kubernetes-upgrade-302431 in network mk-kubernetes-upgrade-302431
	I1008 18:52:17.779679  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | I1008 18:52:17.779633  574271 retry.go:31] will retry after 4.661468167s: waiting for machine to come up
	I1008 18:52:22.443271  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.443653  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has current primary IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.443671  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Found IP for machine: 192.168.50.39
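	The repeated `retry.go:31` lines above show the driver polling for the domain's DHCP lease with a growing pause until an IP appears. A small generic sketch of that retry-with-backoff pattern; the lookup function and timings are placeholders, not minikube's exact retry policy:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxWait elapses, roughly
// doubling the pause between attempts, similar in spirit to the
// "will retry after ..." messages in the log.
func retryWithBackoff(fn func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	pause := 250 * time.Millisecond
	for {
		ip, err := fn()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("gave up waiting: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", pause, err)
		time.Sleep(pause)
		pause *= 2
	}
}

func main() {
	attempts := 0
	// Placeholder lookup standing in for querying libvirt's DHCP leases.
	lookupIP := func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.39", nil
	}
	ip, err := retryWithBackoff(lookupIP, 2*time.Minute)
	fmt.Println(ip, err)
}
```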
	I1008 18:52:22.443683  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Reserving static IP address...
	I1008 18:52:22.444235  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-302431", mac: "52:54:00:6e:7c:3d", ip: "192.168.50.39"} in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.515986  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Getting to WaitForSSH function...
	I1008 18:52:22.516021  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Reserved static IP address: 192.168.50.39
	I1008 18:52:22.516036  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Waiting for SSH to be available...
	I1008 18:52:22.518445  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.518912  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:22.518942  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.519130  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Using SSH client type: external
	I1008 18:52:22.519162  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa (-rw-------)
	I1008 18:52:22.519206  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 18:52:22.519219  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | About to run SSH command:
	I1008 18:52:22.519253  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | exit 0
	I1008 18:52:22.645906  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | SSH cmd err, output: <nil>: 
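	WaitForSSH here shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until it succeeds. A minimal sketch of that probe using os/exec; the address, key path, and retry count are placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh ... exit 0` succeeds, i.e. once sshd on the
// guest accepts the key and can run a trivial command.
func sshReady(addr, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	for i := 0; i < 30; i++ {
		if err := sshReady("192.168.50.39", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
```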
	I1008 18:52:22.646115  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) KVM machine creation complete!
	I1008 18:52:22.646458  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetConfigRaw
	I1008 18:52:22.647080  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:22.647270  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:22.647464  573944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 18:52:22.647482  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetState
	I1008 18:52:22.648831  573944 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 18:52:22.648863  573944 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 18:52:22.648870  573944 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 18:52:22.648879  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:22.651012  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.651337  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:22.651363  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.651478  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:22.651653  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:22.651838  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:22.651944  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:22.652077  573944 main.go:141] libmachine: Using SSH client type: native
	I1008 18:52:22.652281  573944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1008 18:52:22.652293  573944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 18:52:22.761287  573944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:52:22.761313  573944 main.go:141] libmachine: Detecting the provisioner...
	I1008 18:52:22.761325  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:22.763934  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.764239  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:22.764261  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.764437  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:22.764625  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:22.764777  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:22.764891  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:22.765051  573944 main.go:141] libmachine: Using SSH client type: native
	I1008 18:52:22.765240  573944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1008 18:52:22.765252  573944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 18:52:22.878693  573944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 18:52:22.878755  573944 main.go:141] libmachine: found compatible host: buildroot
	I1008 18:52:22.878762  573944 main.go:141] libmachine: Provisioning with buildroot...
	I1008 18:52:22.878770  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetMachineName
	I1008 18:52:22.879024  573944 buildroot.go:166] provisioning hostname "kubernetes-upgrade-302431"
	I1008 18:52:22.879051  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetMachineName
	I1008 18:52:22.879182  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:22.881816  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.882181  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:22.882201  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:22.882363  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:22.882539  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:22.882698  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:22.882793  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:22.882935  573944 main.go:141] libmachine: Using SSH client type: native
	I1008 18:52:22.883108  573944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1008 18:52:22.883119  573944 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-302431 && echo "kubernetes-upgrade-302431" | sudo tee /etc/hostname
	I1008 18:52:23.009968  573944 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-302431
	
	I1008 18:52:23.010004  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.013047  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.013386  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.013431  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.013597  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.013790  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.013962  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.014133  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.014268  573944 main.go:141] libmachine: Using SSH client type: native
	I1008 18:52:23.014497  573944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1008 18:52:23.014521  573944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-302431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-302431/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-302431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:52:23.140960  573944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:52:23.141003  573944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:52:23.141076  573944 buildroot.go:174] setting up certificates
	I1008 18:52:23.141096  573944 provision.go:84] configureAuth start
	I1008 18:52:23.141117  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetMachineName
	I1008 18:52:23.141425  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetIP
	I1008 18:52:23.144630  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.145047  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.145075  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.145313  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.147725  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.148178  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.148213  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.148340  573944 provision.go:143] copyHostCerts
	I1008 18:52:23.148405  573944 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:52:23.148415  573944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:52:23.148464  573944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:52:23.148582  573944 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:52:23.148591  573944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:52:23.148612  573944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:52:23.148683  573944 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:52:23.148691  573944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:52:23.148708  573944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
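	copyHostCerts above removes any stale copy of each PEM in the store directory and re-copies it from the certs directory ("found ..., removing ..." then "cp: ... --> ..."). A small remove-then-copy sketch with the standard library; paths are illustrative:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// refreshCopy replaces dst with a fresh copy of src, mirroring the
// found/removing/cp sequence in the log above.
func refreshCopy(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	_ = refreshCopy("/path/to/certs/ca.pem", "/path/to/.minikube/ca.pem")
}
```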
	I1008 18:52:23.148765  573944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-302431 san=[127.0.0.1 192.168.50.39 kubernetes-upgrade-302431 localhost minikube]
	I1008 18:52:23.270396  573944 provision.go:177] copyRemoteCerts
	I1008 18:52:23.270459  573944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:52:23.270484  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.272657  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.272912  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.272949  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.273079  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.273241  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.273378  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.273537  573944 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa Username:docker}
	I1008 18:52:23.359942  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:52:23.383277  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1008 18:52:23.405860  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:52:23.428287  573944 provision.go:87] duration metric: took 287.173965ms to configureAuth
	I1008 18:52:23.428313  573944 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:52:23.428471  573944 config.go:182] Loaded profile config "kubernetes-upgrade-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 18:52:23.428554  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.431167  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.431645  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.431673  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.431914  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.432128  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.432262  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.432411  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.432544  573944 main.go:141] libmachine: Using SSH client type: native
	I1008 18:52:23.432724  573944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1008 18:52:23.432743  573944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:52:23.672801  573944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:52:23.672834  573944 main.go:141] libmachine: Checking connection to Docker...
	I1008 18:52:23.672847  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetURL
	I1008 18:52:23.674280  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | Using libvirt version 6000000
	I1008 18:52:23.676317  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.676660  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.676684  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.676849  573944 main.go:141] libmachine: Docker is up and running!
	I1008 18:52:23.676861  573944 main.go:141] libmachine: Reticulating splines...
	I1008 18:52:23.676871  573944 client.go:171] duration metric: took 25.231144769s to LocalClient.Create
	I1008 18:52:23.676898  573944 start.go:167] duration metric: took 25.2312122s to libmachine.API.Create "kubernetes-upgrade-302431"
	I1008 18:52:23.676911  573944 start.go:293] postStartSetup for "kubernetes-upgrade-302431" (driver="kvm2")
	I1008 18:52:23.676925  573944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:52:23.676956  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:23.677191  573944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:52:23.677233  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.679235  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.679547  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.679577  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.679698  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.679895  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.680055  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.680173  573944 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa Username:docker}
	I1008 18:52:23.768303  573944 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:52:23.772189  573944 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:52:23.772217  573944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:52:23.772274  573944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:52:23.772368  573944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:52:23.772476  573944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:52:23.781118  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:52:23.803267  573944 start.go:296] duration metric: took 126.344291ms for postStartSetup
	I1008 18:52:23.803318  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetConfigRaw
	I1008 18:52:23.803916  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetIP
	I1008 18:52:23.806642  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.806970  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.806994  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.807304  573944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/config.json ...
	I1008 18:52:23.807504  573944 start.go:128] duration metric: took 25.384456651s to createHost
	I1008 18:52:23.807544  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.809499  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.809819  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.809848  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.809975  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.810151  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.810293  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.810411  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.810566  573944 main.go:141] libmachine: Using SSH client type: native
	I1008 18:52:23.810734  573944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I1008 18:52:23.810744  573944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:52:23.922550  573944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728413543.898465185
	
	I1008 18:52:23.922576  573944 fix.go:216] guest clock: 1728413543.898465185
	I1008 18:52:23.922583  573944 fix.go:229] Guest: 2024-10-08 18:52:23.898465185 +0000 UTC Remote: 2024-10-08 18:52:23.807522049 +0000 UTC m=+51.257818939 (delta=90.943136ms)
	I1008 18:52:23.922602  573944 fix.go:200] guest clock delta is within tolerance: 90.943136ms
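	The clock check above runs `date +%s.%N` on the guest, parses the output as a fractional Unix timestamp, and compares it against the host clock; only if the delta exceeds a tolerance would the guest time be adjusted. A sketch of the delta computation; the tolerance value here is an assumption, not minikube's exact threshold:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` from the guest and returns
// how far it is from the local clock.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	return host.Sub(guest), nil
}

func main() {
	// Example output resembling the log line above.
	delta, err := clockDelta("1728413543.898465185\n", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's value
	if d := delta.Abs(); d < tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
	} else {
		fmt.Printf("guest clock needs adjustment: %v\n", d)
	}
}
```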
	I1008 18:52:23.922607  573944 start.go:83] releasing machines lock for "kubernetes-upgrade-302431", held for 25.499735545s
	I1008 18:52:23.922636  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:23.922940  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetIP
	I1008 18:52:23.925763  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.926131  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.926159  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.926309  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:23.926884  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:23.927054  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .DriverName
	I1008 18:52:23.927151  573944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:52:23.927191  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.927292  573944 ssh_runner.go:195] Run: cat /version.json
	I1008 18:52:23.927320  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHHostname
	I1008 18:52:23.929843  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.929952  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.930214  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.930240  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.930350  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:23.930365  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.930374  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:23.930532  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHPort
	I1008 18:52:23.930538  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.930726  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHKeyPath
	I1008 18:52:23.930737  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.930881  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetSSHUsername
	I1008 18:52:23.930892  573944 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa Username:docker}
	I1008 18:52:23.930979  573944 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kubernetes-upgrade-302431/id_rsa Username:docker}
	I1008 18:52:24.011344  573944 ssh_runner.go:195] Run: systemctl --version
	I1008 18:52:24.040751  573944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:52:24.203404  573944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:52:24.211607  573944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:52:24.211672  573944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:52:24.232513  573944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 18:52:24.232546  573944 start.go:495] detecting cgroup driver to use...
	I1008 18:52:24.232616  573944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:52:24.248319  573944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:52:24.262197  573944 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:52:24.262253  573944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:52:24.275696  573944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:52:24.289114  573944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:52:24.401119  573944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:52:24.544380  573944 docker.go:233] disabling docker service ...
	I1008 18:52:24.544452  573944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:52:24.558619  573944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:52:24.571816  573944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:52:24.711877  573944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:52:24.848857  573944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:52:24.865634  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:52:24.883336  573944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 18:52:24.883391  573944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:52:24.893530  573944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:52:24.893596  573944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:52:24.903742  573944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:52:24.913653  573944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:52:24.923818  573944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:52:24.934500  573944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:52:24.944335  573944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 18:52:24.944409  573944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 18:52:24.957976  573944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:52:24.968439  573944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:52:25.097871  573944 ssh_runner.go:195] Run: sudo systemctl restart crio
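	The `sudo sed -i` invocations above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image and the cgroup manager, after which crio is restarted. The same in-place substitution done in Go with regexp over the file contents; the path and values mirror the log, error handling is minimal, and this is a sketch rather than minikube's code:

```go
package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf sets pause_image and cgroup_manager in a crio drop-in,
// equivalent to the two `sudo sed -i` commands in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs")
}
```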
	I1008 18:52:25.201333  573944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:52:25.201419  573944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:52:25.206278  573944 start.go:563] Will wait 60s for crictl version
	I1008 18:52:25.206363  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:25.210066  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:52:25.249282  573944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:52:25.249382  573944 ssh_runner.go:195] Run: crio --version
	I1008 18:52:25.282868  573944 ssh_runner.go:195] Run: crio --version
	I1008 18:52:25.313621  573944 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 18:52:25.314667  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetIP
	I1008 18:52:25.317469  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:25.317826  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:52:13 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:52:25.317859  573944 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:52:25.318033  573944 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 18:52:25.322013  573944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:52:25.333987  573944 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-302431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:52:25.334096  573944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 18:52:25.334137  573944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:52:25.363093  573944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 18:52:25.363154  573944 ssh_runner.go:195] Run: which lz4
	I1008 18:52:25.366833  573944 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 18:52:25.370905  573944 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 18:52:25.370928  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 18:52:26.986760  573944 crio.go:462] duration metric: took 1.619954718s to copy over tarball
	I1008 18:52:26.986834  573944 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 18:52:29.635693  573944 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.648824634s)
	I1008 18:52:29.635751  573944 crio.go:469] duration metric: took 2.648944098s to extract the tarball
	I1008 18:52:29.635765  573944 ssh_runner.go:146] rm: /preloaded.tar.lz4
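	# Sketch of the preload restore just performed: the image tarball is unpacked under /var so
	# CRI-O's image store is populated, then the tarball is removed (paths are the ones from this run).
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json   # re-check which images the runtime now reports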
	I1008 18:52:29.678375  573944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:52:29.722799  573944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 18:52:29.722829  573944 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 18:52:29.722909  573944 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:29.722942  573944 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:29.722943  573944 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 18:52:29.722909  573944 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:52:29.722982  573944 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 18:52:29.723010  573944 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:29.722996  573944 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:29.722924  573944 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:29.724861  573944 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:29.724984  573944 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 18:52:29.725207  573944 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:29.724864  573944 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:52:29.725274  573944 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:29.725345  573944 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:29.725570  573944 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 18:52:29.725731  573944 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:29.886346  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:29.905298  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:29.914462  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:29.916213  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 18:52:29.918464  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:29.946303  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 18:52:29.949873  573944 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 18:52:29.949925  573944 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:29.949968  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:29.953396  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:30.020676  573944 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 18:52:30.020738  573944 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:30.020809  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:30.044127  573944 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 18:52:30.044184  573944 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:30.044242  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:30.049351  573944 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 18:52:30.049404  573944 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 18:52:30.049455  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:30.054580  573944 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 18:52:30.054626  573944 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:30.054677  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:30.076715  573944 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 18:52:30.076770  573944 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 18:52:30.076779  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:30.076815  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:30.088318  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:30.088334  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 18:52:30.088327  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:30.088404  573944 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 18:52:30.088389  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:30.088445  573944 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:30.088489  573944 ssh_runner.go:195] Run: which crictl
	I1008 18:52:30.158223  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:30.158254  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:30.158229  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 18:52:30.181861  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:30.210049  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:30.210114  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:30.210125  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 18:52:30.254529  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 18:52:30.318626  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:30.318766  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 18:52:30.336784  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:52:30.336877  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:52:30.371808  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:52:30.371845  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 18:52:30.390131  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 18:52:30.479532  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 18:52:30.479574  573944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:52:30.479626  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 18:52:30.479659  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 18:52:30.497613  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 18:52:30.497631  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 18:52:30.534417  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 18:52:30.534503  573944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 18:52:30.625125  573944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:52:30.767112  573944 cache_images.go:92] duration metric: took 1.044252828s to LoadCachedImages
	W1008 18:52:30.767213  573944 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1008 18:52:30.767233  573944 kubeadm.go:934] updating node { 192.168.50.39 8443 v1.20.0 crio true true} ...
	I1008 18:52:30.767363  573944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-302431 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:52:30.767438  573944 ssh_runner.go:195] Run: crio config
	I1008 18:52:30.821040  573944 cni.go:84] Creating CNI manager for ""
	I1008 18:52:30.821072  573944 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:52:30.821098  573944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:52:30.821131  573944 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-302431 NodeName:kubernetes-upgrade-302431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 18:52:30.821331  573944 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-302431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:52:30.821406  573944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 18:52:30.832190  573944 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:52:30.832273  573944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:52:30.842048  573944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1008 18:52:30.860239  573944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:52:30.878216  573944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
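	# Sketch: the generated kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (above); with the
	# kubeadm binary minikube keeps under /var/lib/minikube/binaries it could be exercised without mutating
	# the node via dry-run (an optional sanity check, not a step this run performs):
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run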
	I1008 18:52:30.896285  573944 ssh_runner.go:195] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I1008 18:52:30.899947  573944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:52:30.911588  573944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:52:31.024316  573944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:52:31.040876  573944 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431 for IP: 192.168.50.39
	I1008 18:52:31.040904  573944 certs.go:194] generating shared ca certs ...
	I1008 18:52:31.040934  573944 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.041130  573944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:52:31.041194  573944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:52:31.041208  573944 certs.go:256] generating profile certs ...
	I1008 18:52:31.041286  573944 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.key
	I1008 18:52:31.041320  573944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.crt with IP's: []
	I1008 18:52:31.088817  573944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.crt ...
	I1008 18:52:31.088847  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.crt: {Name:mk15533662695288f737fe2f391c0490d64c952e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.089005  573944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.key ...
	I1008 18:52:31.089017  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.key: {Name:mk34895a396a9bf06e11244259b082c5254cfc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.089090  573944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key.8236a5c9
	I1008 18:52:31.089106  573944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt.8236a5c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.39]
	I1008 18:52:31.201850  573944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt.8236a5c9 ...
	I1008 18:52:31.201883  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt.8236a5c9: {Name:mk5a38dc2792bfd135fe059111e54a9eb9527d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.202037  573944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key.8236a5c9 ...
	I1008 18:52:31.202051  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key.8236a5c9: {Name:mked803e358b5123a1a1f9026c976a953aaf45e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.202120  573944 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt.8236a5c9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt
	I1008 18:52:31.202215  573944 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key.8236a5c9 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key
	I1008 18:52:31.202285  573944 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.key
	I1008 18:52:31.202314  573944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.crt with IP's: []
	I1008 18:52:31.303616  573944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.crt ...
	I1008 18:52:31.303651  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.crt: {Name:mk96aaafd4db58ebd5451d8c8c620ebdb6f64a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.303843  573944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.key ...
	I1008 18:52:31.303864  573944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.key: {Name:mk462e4c3c808fb6f675af012dd8e941a29a8e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:52:31.304102  573944 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:52:31.304150  573944 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:52:31.304166  573944 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:52:31.304196  573944 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:52:31.304226  573944 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:52:31.304257  573944 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:52:31.304310  573944 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:52:31.304991  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:52:31.333505  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:52:31.362019  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:52:31.390040  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:52:31.413415  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 18:52:31.440286  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:52:31.466248  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:52:31.491568  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:52:31.520400  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:52:31.543625  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:52:31.568095  573944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:52:31.590851  573944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:52:31.606639  573944 ssh_runner.go:195] Run: openssl version
	I1008 18:52:31.612063  573944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:52:31.622246  573944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:52:31.627483  573944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:52:31.627544  573944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:52:31.633399  573944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:52:31.643382  573944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:52:31.653371  573944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:52:31.657570  573944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:52:31.657620  573944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:52:31.662925  573944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:52:31.672841  573944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:52:31.682919  573944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:52:31.687537  573944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:52:31.687582  573944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:52:31.693157  573944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
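	# Sketch of the CA trust wiring above, simplified to one cert: link the PEM into /etc/ssl/certs
	# under its OpenSSL subject-hash name so system TLS verification finds it.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"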
	I1008 18:52:31.702954  573944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:52:31.706755  573944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 18:52:31.706827  573944 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-302431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:52:31.706936  573944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:52:31.706998  573944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:52:31.743431  573944 cri.go:89] found id: ""
	I1008 18:52:31.743509  573944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 18:52:31.753434  573944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 18:52:31.762700  573944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:52:31.772340  573944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:52:31.772358  573944 kubeadm.go:157] found existing configuration files:
	
	I1008 18:52:31.772402  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:52:31.781339  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:52:31.781402  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:52:31.790343  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:52:31.798628  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:52:31.798683  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:52:31.807372  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:52:31.815753  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:52:31.815840  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:52:31.824707  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:52:31.833232  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:52:31.833287  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
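	# Sketch of the stale-config check above as a single loop: any kubeconfig under /etc/kubernetes
	# that does not point at the expected control-plane endpoint is removed before kubeadm init runs.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done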
	I1008 18:52:31.844847  573944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 18:52:31.993551  573944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 18:52:31.993634  573944 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 18:52:32.149099  573944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 18:52:32.149261  573944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 18:52:32.149421  573944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 18:52:32.323989  573944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 18:52:32.494997  573944 out.go:235]   - Generating certificates and keys ...
	I1008 18:52:32.495125  573944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 18:52:32.495229  573944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 18:52:32.496774  573944 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 18:52:32.596542  573944 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 18:52:32.891101  573944 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 18:52:33.142742  573944 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 18:52:33.296389  573944 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 18:52:33.296557  573944 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I1008 18:52:33.468108  573944 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 18:52:33.468293  573944 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I1008 18:52:33.665411  573944 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 18:52:33.886733  573944 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 18:52:34.046179  573944 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 18:52:34.046424  573944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 18:52:34.163889  573944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 18:52:34.418762  573944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 18:52:34.546310  573944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 18:52:34.753544  573944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 18:52:34.773348  573944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 18:52:34.774139  573944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 18:52:34.774212  573944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 18:52:34.899222  573944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 18:52:34.900972  573944 out.go:235]   - Booting up control plane ...
	I1008 18:52:34.901113  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 18:52:34.905597  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 18:52:34.906604  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 18:52:34.907457  573944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 18:52:34.911368  573944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 18:53:14.905748  573944 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 18:53:14.906061  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:53:14.906352  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:53:19.906937  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:53:19.907258  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:53:29.906572  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:53:29.906786  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:53:49.906375  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:53:49.906678  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:54:29.909046  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:54:29.909498  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:54:29.909516  573944 kubeadm.go:310] 
	I1008 18:54:29.909606  573944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 18:54:29.909690  573944 kubeadm.go:310] 		timed out waiting for the condition
	I1008 18:54:29.909708  573944 kubeadm.go:310] 
	I1008 18:54:29.909788  573944 kubeadm.go:310] 	This error is likely caused by:
	I1008 18:54:29.909861  573944 kubeadm.go:310] 		- The kubelet is not running
	I1008 18:54:29.910093  573944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 18:54:29.910103  573944 kubeadm.go:310] 
	I1008 18:54:29.910392  573944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 18:54:29.910491  573944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 18:54:29.910601  573944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 18:54:29.910634  573944 kubeadm.go:310] 
	I1008 18:54:29.910933  573944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 18:54:29.911623  573944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 18:54:29.911651  573944 kubeadm.go:310] 
	I1008 18:54:29.911884  573944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 18:54:29.912091  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 18:54:29.912295  573944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 18:54:29.912495  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 18:54:29.912522  573944 kubeadm.go:310] 
	I1008 18:54:29.912919  573944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 18:54:29.913448  573944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 18:54:29.913573  573944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 18:54:29.913704  573944 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 18:54:29.913760  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 18:54:30.906102  573944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:54:30.923525  573944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:54:30.933788  573944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:54:30.933814  573944 kubeadm.go:157] found existing configuration files:
	
	I1008 18:54:30.933869  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:54:30.943454  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:54:30.943516  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:54:30.954492  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:54:30.963676  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:54:30.963717  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:54:30.973366  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:54:30.983733  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:54:30.983792  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:54:30.994523  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:54:31.004856  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:54:31.004900  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 18:54:31.015494  573944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 18:54:31.085548  573944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 18:54:31.085627  573944 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 18:54:31.230446  573944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 18:54:31.230582  573944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 18:54:31.230723  573944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 18:54:31.442163  573944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 18:54:31.445304  573944 out.go:235]   - Generating certificates and keys ...
	I1008 18:54:31.445410  573944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 18:54:31.445559  573944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 18:54:31.445689  573944 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 18:54:31.445791  573944 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 18:54:31.445890  573944 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 18:54:31.445978  573944 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 18:54:31.446069  573944 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 18:54:31.446157  573944 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 18:54:31.446265  573944 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 18:54:31.446433  573944 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 18:54:31.446488  573944 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 18:54:31.446578  573944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 18:54:31.528192  573944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 18:54:31.667896  573944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 18:54:31.934272  573944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 18:54:32.103890  573944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 18:54:32.121309  573944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 18:54:32.122596  573944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 18:54:32.122710  573944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 18:54:32.296864  573944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 18:54:32.298805  573944 out.go:235]   - Booting up control plane ...
	I1008 18:54:32.298931  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 18:54:32.314714  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 18:54:32.316162  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 18:54:32.317163  573944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 18:54:32.320028  573944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 18:55:12.323646  573944 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 18:55:12.323979  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:55:12.324244  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:55:17.324947  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:55:17.325202  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:55:27.326091  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:55:27.326424  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:55:47.324935  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:55:47.325143  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:56:27.324837  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:56:27.325100  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:56:27.325111  573944 kubeadm.go:310] 
	I1008 18:56:27.325166  573944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 18:56:27.325215  573944 kubeadm.go:310] 		timed out waiting for the condition
	I1008 18:56:27.325221  573944 kubeadm.go:310] 
	I1008 18:56:27.325271  573944 kubeadm.go:310] 	This error is likely caused by:
	I1008 18:56:27.325356  573944 kubeadm.go:310] 		- The kubelet is not running
	I1008 18:56:27.325524  573944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 18:56:27.325540  573944 kubeadm.go:310] 
	I1008 18:56:27.325679  573944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 18:56:27.325727  573944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 18:56:27.325772  573944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 18:56:27.325785  573944 kubeadm.go:310] 
	I1008 18:56:27.325950  573944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 18:56:27.326056  573944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 18:56:27.326068  573944 kubeadm.go:310] 
	I1008 18:56:27.326205  573944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 18:56:27.326312  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 18:56:27.326426  573944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 18:56:27.326532  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 18:56:27.326552  573944 kubeadm.go:310] 
	I1008 18:56:27.327161  573944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 18:56:27.327270  573944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 18:56:27.327356  573944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 18:56:27.327613  573944 kubeadm.go:394] duration metric: took 3m55.620790587s to StartCluster
	I1008 18:56:27.327674  573944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 18:56:27.327748  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 18:56:27.382485  573944 cri.go:89] found id: ""
	I1008 18:56:27.382522  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.382534  573944 logs.go:284] No container was found matching "kube-apiserver"
	I1008 18:56:27.382549  573944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 18:56:27.382628  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 18:56:27.433427  573944 cri.go:89] found id: ""
	I1008 18:56:27.433463  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.433476  573944 logs.go:284] No container was found matching "etcd"
	I1008 18:56:27.433485  573944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 18:56:27.433559  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 18:56:27.484070  573944 cri.go:89] found id: ""
	I1008 18:56:27.484112  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.484126  573944 logs.go:284] No container was found matching "coredns"
	I1008 18:56:27.484134  573944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 18:56:27.484220  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 18:56:27.533137  573944 cri.go:89] found id: ""
	I1008 18:56:27.533170  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.533182  573944 logs.go:284] No container was found matching "kube-scheduler"
	I1008 18:56:27.533190  573944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 18:56:27.533258  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 18:56:27.575449  573944 cri.go:89] found id: ""
	I1008 18:56:27.575485  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.575499  573944 logs.go:284] No container was found matching "kube-proxy"
	I1008 18:56:27.575507  573944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 18:56:27.575588  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 18:56:27.616446  573944 cri.go:89] found id: ""
	I1008 18:56:27.616474  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.616485  573944 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 18:56:27.616493  573944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 18:56:27.616549  573944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 18:56:27.665012  573944 cri.go:89] found id: ""
	I1008 18:56:27.665040  573944 logs.go:282] 0 containers: []
	W1008 18:56:27.665050  573944 logs.go:284] No container was found matching "kindnet"
	I1008 18:56:27.665061  573944 logs.go:123] Gathering logs for kubelet ...
	I1008 18:56:27.665074  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 18:56:27.731347  573944 logs.go:123] Gathering logs for dmesg ...
	I1008 18:56:27.731376  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 18:56:27.750087  573944 logs.go:123] Gathering logs for describe nodes ...
	I1008 18:56:27.750122  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 18:56:27.925091  573944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 18:56:27.925124  573944 logs.go:123] Gathering logs for CRI-O ...
	I1008 18:56:27.925142  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 18:56:28.071345  573944 logs.go:123] Gathering logs for container status ...
	I1008 18:56:28.071381  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 18:56:28.128011  573944 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 18:56:28.128067  573944 out.go:270] * 
	W1008 18:56:28.128122  573944 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 18:56:28.128135  573944 out.go:270] * 
	W1008 18:56:28.129150  573944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 18:56:28.132121  573944 out.go:201] 
	W1008 18:56:28.133227  573944 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 18:56:28.133275  573944 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 18:56:28.133303  573944 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 18:56:28.134788  573944 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
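The v1.20.0 start dies in kubeadm's wait-control-plane phase: the kubelet never answers http://localhost:10248/healthz, so no control-plane containers are ever found. A minimal manual follow-up on this profile, assuming the kvm2/cri-o configuration used in this run and relying only on commands the output above already suggests, could look like:

    # inspect the kubelet on the node (as suggested by kubeadm above)
    out/minikube-linux-amd64 -p kubernetes-upgrade-302431 ssh -- sudo systemctl status kubelet
    out/minikube-linux-amd64 -p kubernetes-upgrade-302431 ssh -- sudo journalctl -xeu kubelet
    out/minikube-linux-amd64 -p kubernetes-upgrade-302431 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

    # retry the old-version start with the cgroup-driver hint from the warning above
    out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd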
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-302431
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-302431: (6.818449747s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302431 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-302431 status --format={{.Host}}: exit status 7 (89.451731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
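The non-zero exit is expected here: after an explicit stop, status reports the host as Stopped and exits with a non-zero code (7 in this run), which the test accepts before retrying the start. A small sketch of the same tolerant check done by hand, assuming this profile and binary:

    # "|| true" keeps a script alive across the non-zero exit a stopped profile produces
    host_state=$(out/minikube-linux-amd64 -p kubernetes-upgrade-302431 status --format={{.Host}}) || true
    echo "host state: ${host_state}"   # prints "host state: Stopped" for this run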
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.362034879s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-302431 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.386239ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-302431] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-302431
	    minikube start -p kubernetes-upgrade-302431 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3024312 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-302431 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
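The refusal is the intended behaviour: minikube exits 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than touching the running v1.31.1 cluster. Following the first suggestion printed above, recreating the profile at the older version by hand would look roughly like this (delete/start commands taken from the suggestion, with the driver and runtime flags this run uses added for completeness):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-302431
    out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio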
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302431 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m38.605178544s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-08 19:00:11.209563895 +0000 UTC m=+5204.698716091
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-302431 -n kubernetes-upgrade-302431
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-302431 logs -n 25: (1.614401683s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-070950 --driver=kvm2                        | second-070950             | jenkins | v1.34.0 | 08 Oct 24 18:28 UTC | 08 Oct 24 18:28 UTC |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p second-070950                                      | second-070950             | jenkins | v1.34.0 | 08 Oct 24 18:28 UTC | 08 Oct 24 18:28 UTC |
	| delete  | -p first-057770                                       | first-057770              | jenkins | v1.34.0 | 08 Oct 24 18:28 UTC | 08 Oct 24 18:28 UTC |
	| start   | -p mount-start-1-085348                               | mount-start-1-085348      | jenkins | v1.34.0 | 08 Oct 24 18:28 UTC | 08 Oct 24 18:29 UTC |
	|         | --memory=2048 --mount                                 |                           |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                           |                           |         |         |                     |                     |
	|         | 6543 --mount-port                                     |                           |         |         |                     |                     |
	|         | 46464 --mount-uid 0                                   |                           |         |         |                     |                     |
	|         | --no-kubernetes --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| mount   | /home/jenkins:/minikube-host                          | mount-start-1-085348      | jenkins | v1.34.0 | 08 Oct 24 18:29 UTC |                     |
	|         | --profile mount-start-1-085348                        |                           |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                           |                           |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                            |                           |         |         |                     |                     |
	|         | --port 46464 --type 9p --uid 0                        |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-254330                          | force-systemd-flag-254330 | jenkins | v1.34.0 | 08 Oct 24 18:56 UTC | 08 Oct 24 18:56 UTC |
	| start   | -p kubernetes-upgrade-302431                          | kubernetes-upgrade-302431 | jenkins | v1.34.0 | 08 Oct 24 18:56 UTC | 08 Oct 24 18:57 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p cert-options-773474                                | cert-options-773474       | jenkins | v1.34.0 | 08 Oct 24 18:56 UTC | 08 Oct 24 18:57 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-038693 sudo                           | NoKubernetes-038693       | jenkins | v1.34.0 | 08 Oct 24 18:56 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-038693                                | NoKubernetes-038693       | jenkins | v1.34.0 | 08 Oct 24 18:56 UTC | 08 Oct 24 18:57 UTC |
	| start   | -p NoKubernetes-038693                                | NoKubernetes-038693       | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:57 UTC |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302431                          | kubernetes-upgrade-302431 | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302431                          | kubernetes-upgrade-302431 | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 19:00 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-773474 ssh                               | cert-options-773474       | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:57 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-773474 -- sudo                        | cert-options-773474       | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:57 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-773474                                | cert-options-773474       | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:57 UTC |
	| start   | -p old-k8s-version-256554                             | old-k8s-version-256554    | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-038693 sudo                           | NoKubernetes-038693       | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                | NoKubernetes-038693       | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                  | no-preload-966632         | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-439352                             | cert-expiration-439352    | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                             | cert-expiration-439352    | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                 | embed-certs-783146        | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632            | no-preload-966632         | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-966632                                  | no-preload-966632         | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
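	The last row of the table above shows "stop -p no-preload-966632 --alsologtostderr -v=3" with a start time but no completion time, i.e. the stop step for this profile did not finish within the run. Below is a minimal Go sketch of driving such a command with a hard deadline so a hung stop surfaces as a timeout; the two-minute deadline, the use of os/exec, and the assumption that minikube is on PATH are all illustrative and not part of the test harness.

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// Assumption: minikube is on PATH; the profile name and the two-minute
		// deadline are illustrative values, not taken from the harness.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "minikube", "stop", "-p", "no-preload-966632", "--alsologtostderr", "-v=3")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr

		err := cmd.Run()
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("minikube stop did not finish before the deadline")
			return
		}
		if err != nil {
			fmt.Println("minikube stop failed:", err)
		}
	}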
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:59:35
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:59:35.728191  582660 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:59:35.728299  582660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:59:35.728308  582660 out.go:358] Setting ErrFile to fd 2...
	I1008 18:59:35.728313  582660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:59:35.728503  582660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:59:35.729082  582660 out.go:352] Setting JSON to false
	I1008 18:59:35.730152  582660 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9728,"bootTime":1728404248,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:59:35.730220  582660 start.go:139] virtualization: kvm guest
	I1008 18:59:35.732212  582660 out.go:177] * [embed-certs-783146] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:59:35.733358  582660 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:59:35.733426  582660 notify.go:220] Checking for updates...
	I1008 18:59:35.735477  582660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:59:35.736853  582660 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:59:35.738004  582660 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:59:35.739235  582660 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:59:35.740389  582660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:59:35.741815  582660 config.go:182] Loaded profile config "kubernetes-upgrade-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:59:35.741925  582660 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:59:35.742020  582660 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 18:59:35.742109  582660 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:59:35.777485  582660 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 18:59:35.778687  582660 start.go:297] selected driver: kvm2
	I1008 18:59:35.778700  582660 start.go:901] validating driver "kvm2" against <nil>
	I1008 18:59:35.778713  582660 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:59:35.779436  582660 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:59:35.779519  582660 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:59:35.794540  582660 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:59:35.794600  582660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:59:35.794871  582660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:59:35.794907  582660 cni.go:84] Creating CNI manager for ""
	I1008 18:59:35.794961  582660 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:59:35.794973  582660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 18:59:35.795044  582660 start.go:340] cluster config:
	{Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:59:35.795149  582660 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:59:35.797436  582660 out.go:177] * Starting "embed-certs-783146" primary control-plane node in "embed-certs-783146" cluster
	I1008 18:59:35.798545  582660 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:59:35.798585  582660 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:59:35.798594  582660 cache.go:56] Caching tarball of preloaded images
	I1008 18:59:35.798669  582660 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:59:35.798682  582660 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:59:35.798764  582660 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 18:59:35.798780  582660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json: {Name:mke9ed3180ff015830d2a45cee14a6d90dce6180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:59:35.798900  582660 start.go:360] acquireMachinesLock for embed-certs-783146: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:59:35.798929  582660 start.go:364] duration metric: took 14.523µs to acquireMachinesLock for "embed-certs-783146"
	I1008 18:59:35.798942  582660 start.go:93] Provisioning new machine with config: &{Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:59:35.798994  582660 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 18:59:38.056763  581274 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.465273678s)
	I1008 18:59:38.056802  581274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:59:38.056861  581274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:59:38.064261  581274 start.go:563] Will wait 60s for crictl version
	I1008 18:59:38.064318  581274 ssh_runner.go:195] Run: which crictl
	I1008 18:59:38.068456  581274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:59:38.112654  581274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:59:38.112748  581274 ssh_runner.go:195] Run: crio --version
	I1008 18:59:38.144982  581274 ssh_runner.go:195] Run: crio --version
	I1008 18:59:38.176356  581274 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:59:36.420092  581808 pod_ready.go:103] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"False"
	I1008 18:59:38.420873  581808 pod_ready.go:103] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"False"
	I1008 18:59:40.419627  581808 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 18:59:40.419651  581808 pod_ready.go:82] duration metric: took 8.506347858s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.419660  581808 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.424637  581808 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 18:59:40.424660  581808 pod_ready.go:82] duration metric: took 4.992945ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.424671  581808 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.428999  581808 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 18:59:40.429019  581808 pod_ready.go:82] duration metric: took 4.341053ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.429030  581808 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.433194  581808 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 18:59:40.433209  581808 pod_ready.go:82] duration metric: took 4.172604ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.433216  581808 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.437029  581808 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 18:59:40.437044  581808 pod_ready.go:82] duration metric: took 3.822614ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.437051  581808 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:35.800492  582660 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 18:59:35.800637  582660 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 18:59:35.800680  582660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:59:35.815009  582660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1008 18:59:35.815445  582660 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:59:35.815976  582660 main.go:141] libmachine: Using API Version  1
	I1008 18:59:35.815999  582660 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:59:35.816333  582660 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:59:35.816584  582660 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 18:59:35.816753  582660 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 18:59:35.816900  582660 start.go:159] libmachine.API.Create for "embed-certs-783146" (driver="kvm2")
	I1008 18:59:35.816931  582660 client.go:168] LocalClient.Create starting
	I1008 18:59:35.816958  582660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 18:59:35.817002  582660 main.go:141] libmachine: Decoding PEM data...
	I1008 18:59:35.817025  582660 main.go:141] libmachine: Parsing certificate...
	I1008 18:59:35.817135  582660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 18:59:35.817165  582660 main.go:141] libmachine: Decoding PEM data...
	I1008 18:59:35.817186  582660 main.go:141] libmachine: Parsing certificate...
	I1008 18:59:35.817210  582660 main.go:141] libmachine: Running pre-create checks...
	I1008 18:59:35.817223  582660 main.go:141] libmachine: (embed-certs-783146) Calling .PreCreateCheck
	I1008 18:59:35.817599  582660 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 18:59:35.818036  582660 main.go:141] libmachine: Creating machine...
	I1008 18:59:35.818055  582660 main.go:141] libmachine: (embed-certs-783146) Calling .Create
	I1008 18:59:35.818195  582660 main.go:141] libmachine: (embed-certs-783146) Creating KVM machine...
	I1008 18:59:35.819408  582660 main.go:141] libmachine: (embed-certs-783146) DBG | found existing default KVM network
	I1008 18:59:35.820643  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:35.820484  582683 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:fc:33} reservation:<nil>}
	I1008 18:59:35.821499  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:35.821425  582683 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c6:50:f1} reservation:<nil>}
	I1008 18:59:35.822729  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:35.822673  582683 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:3e:fe} reservation:<nil>}
	I1008 18:59:35.823860  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:35.823791  582683 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a97c0}
	I1008 18:59:35.823899  582660 main.go:141] libmachine: (embed-certs-783146) DBG | created network xml: 
	I1008 18:59:35.823915  582660 main.go:141] libmachine: (embed-certs-783146) DBG | <network>
	I1008 18:59:35.823928  582660 main.go:141] libmachine: (embed-certs-783146) DBG |   <name>mk-embed-certs-783146</name>
	I1008 18:59:35.823946  582660 main.go:141] libmachine: (embed-certs-783146) DBG |   <dns enable='no'/>
	I1008 18:59:35.823954  582660 main.go:141] libmachine: (embed-certs-783146) DBG |   
	I1008 18:59:35.823959  582660 main.go:141] libmachine: (embed-certs-783146) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1008 18:59:35.823966  582660 main.go:141] libmachine: (embed-certs-783146) DBG |     <dhcp>
	I1008 18:59:35.823973  582660 main.go:141] libmachine: (embed-certs-783146) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1008 18:59:35.823981  582660 main.go:141] libmachine: (embed-certs-783146) DBG |     </dhcp>
	I1008 18:59:35.823989  582660 main.go:141] libmachine: (embed-certs-783146) DBG |   </ip>
	I1008 18:59:35.824011  582660 main.go:141] libmachine: (embed-certs-783146) DBG |   
	I1008 18:59:35.824033  582660 main.go:141] libmachine: (embed-certs-783146) DBG | </network>
	I1008 18:59:35.824057  582660 main.go:141] libmachine: (embed-certs-783146) DBG | 
	I1008 18:59:35.828594  582660 main.go:141] libmachine: (embed-certs-783146) DBG | trying to create private KVM network mk-embed-certs-783146 192.168.72.0/24...
	I1008 18:59:35.893266  582660 main.go:141] libmachine: (embed-certs-783146) DBG | private KVM network mk-embed-certs-783146 192.168.72.0/24 created
	I1008 18:59:35.893300  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:35.893222  582683 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:59:35.893314  582660 main.go:141] libmachine: (embed-certs-783146) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146 ...
	I1008 18:59:35.893332  582660 main.go:141] libmachine: (embed-certs-783146) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 18:59:35.893527  582660 main.go:141] libmachine: (embed-certs-783146) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 18:59:36.181130  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:36.180984  582683 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa...
	I1008 18:59:36.421837  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:36.421734  582683 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/embed-certs-783146.rawdisk...
	I1008 18:59:36.421861  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Writing magic tar header
	I1008 18:59:36.421873  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Writing SSH key tar header
	I1008 18:59:36.421881  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:36.421850  582683 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146 ...
	I1008 18:59:36.421966  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146
	I1008 18:59:36.421985  582660 main.go:141] libmachine: (embed-certs-783146) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146 (perms=drwx------)
	I1008 18:59:36.422001  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 18:59:36.422013  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:59:36.422020  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 18:59:36.422028  582660 main.go:141] libmachine: (embed-certs-783146) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 18:59:36.422035  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 18:59:36.422041  582660 main.go:141] libmachine: (embed-certs-783146) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 18:59:36.422059  582660 main.go:141] libmachine: (embed-certs-783146) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 18:59:36.422066  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home/jenkins
	I1008 18:59:36.422075  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Checking permissions on dir: /home
	I1008 18:59:36.422081  582660 main.go:141] libmachine: (embed-certs-783146) DBG | Skipping /home - not owner
	I1008 18:59:36.422091  582660 main.go:141] libmachine: (embed-certs-783146) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 18:59:36.422098  582660 main.go:141] libmachine: (embed-certs-783146) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 18:59:36.422104  582660 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 18:59:36.423202  582660 main.go:141] libmachine: (embed-certs-783146) define libvirt domain using xml: 
	I1008 18:59:36.423221  582660 main.go:141] libmachine: (embed-certs-783146) <domain type='kvm'>
	I1008 18:59:36.423230  582660 main.go:141] libmachine: (embed-certs-783146)   <name>embed-certs-783146</name>
	I1008 18:59:36.423236  582660 main.go:141] libmachine: (embed-certs-783146)   <memory unit='MiB'>2200</memory>
	I1008 18:59:36.423244  582660 main.go:141] libmachine: (embed-certs-783146)   <vcpu>2</vcpu>
	I1008 18:59:36.423250  582660 main.go:141] libmachine: (embed-certs-783146)   <features>
	I1008 18:59:36.423260  582660 main.go:141] libmachine: (embed-certs-783146)     <acpi/>
	I1008 18:59:36.423268  582660 main.go:141] libmachine: (embed-certs-783146)     <apic/>
	I1008 18:59:36.423277  582660 main.go:141] libmachine: (embed-certs-783146)     <pae/>
	I1008 18:59:36.423286  582660 main.go:141] libmachine: (embed-certs-783146)     
	I1008 18:59:36.423295  582660 main.go:141] libmachine: (embed-certs-783146)   </features>
	I1008 18:59:36.423303  582660 main.go:141] libmachine: (embed-certs-783146)   <cpu mode='host-passthrough'>
	I1008 18:59:36.423339  582660 main.go:141] libmachine: (embed-certs-783146)   
	I1008 18:59:36.423367  582660 main.go:141] libmachine: (embed-certs-783146)   </cpu>
	I1008 18:59:36.423376  582660 main.go:141] libmachine: (embed-certs-783146)   <os>
	I1008 18:59:36.423390  582660 main.go:141] libmachine: (embed-certs-783146)     <type>hvm</type>
	I1008 18:59:36.423401  582660 main.go:141] libmachine: (embed-certs-783146)     <boot dev='cdrom'/>
	I1008 18:59:36.423410  582660 main.go:141] libmachine: (embed-certs-783146)     <boot dev='hd'/>
	I1008 18:59:36.423421  582660 main.go:141] libmachine: (embed-certs-783146)     <bootmenu enable='no'/>
	I1008 18:59:36.423430  582660 main.go:141] libmachine: (embed-certs-783146)   </os>
	I1008 18:59:36.423452  582660 main.go:141] libmachine: (embed-certs-783146)   <devices>
	I1008 18:59:36.423462  582660 main.go:141] libmachine: (embed-certs-783146)     <disk type='file' device='cdrom'>
	I1008 18:59:36.423489  582660 main.go:141] libmachine: (embed-certs-783146)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/boot2docker.iso'/>
	I1008 18:59:36.423514  582660 main.go:141] libmachine: (embed-certs-783146)       <target dev='hdc' bus='scsi'/>
	I1008 18:59:36.423523  582660 main.go:141] libmachine: (embed-certs-783146)       <readonly/>
	I1008 18:59:36.423534  582660 main.go:141] libmachine: (embed-certs-783146)     </disk>
	I1008 18:59:36.423555  582660 main.go:141] libmachine: (embed-certs-783146)     <disk type='file' device='disk'>
	I1008 18:59:36.423572  582660 main.go:141] libmachine: (embed-certs-783146)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 18:59:36.423599  582660 main.go:141] libmachine: (embed-certs-783146)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/embed-certs-783146.rawdisk'/>
	I1008 18:59:36.423610  582660 main.go:141] libmachine: (embed-certs-783146)       <target dev='hda' bus='virtio'/>
	I1008 18:59:36.423619  582660 main.go:141] libmachine: (embed-certs-783146)     </disk>
	I1008 18:59:36.423628  582660 main.go:141] libmachine: (embed-certs-783146)     <interface type='network'>
	I1008 18:59:36.423638  582660 main.go:141] libmachine: (embed-certs-783146)       <source network='mk-embed-certs-783146'/>
	I1008 18:59:36.423651  582660 main.go:141] libmachine: (embed-certs-783146)       <model type='virtio'/>
	I1008 18:59:36.423661  582660 main.go:141] libmachine: (embed-certs-783146)     </interface>
	I1008 18:59:36.423668  582660 main.go:141] libmachine: (embed-certs-783146)     <interface type='network'>
	I1008 18:59:36.423679  582660 main.go:141] libmachine: (embed-certs-783146)       <source network='default'/>
	I1008 18:59:36.423687  582660 main.go:141] libmachine: (embed-certs-783146)       <model type='virtio'/>
	I1008 18:59:36.423699  582660 main.go:141] libmachine: (embed-certs-783146)     </interface>
	I1008 18:59:36.423707  582660 main.go:141] libmachine: (embed-certs-783146)     <serial type='pty'>
	I1008 18:59:36.423716  582660 main.go:141] libmachine: (embed-certs-783146)       <target port='0'/>
	I1008 18:59:36.423727  582660 main.go:141] libmachine: (embed-certs-783146)     </serial>
	I1008 18:59:36.423749  582660 main.go:141] libmachine: (embed-certs-783146)     <console type='pty'>
	I1008 18:59:36.423760  582660 main.go:141] libmachine: (embed-certs-783146)       <target type='serial' port='0'/>
	I1008 18:59:36.423771  582660 main.go:141] libmachine: (embed-certs-783146)     </console>
	I1008 18:59:36.423778  582660 main.go:141] libmachine: (embed-certs-783146)     <rng model='virtio'>
	I1008 18:59:36.423790  582660 main.go:141] libmachine: (embed-certs-783146)       <backend model='random'>/dev/random</backend>
	I1008 18:59:36.423802  582660 main.go:141] libmachine: (embed-certs-783146)     </rng>
	I1008 18:59:36.423819  582660 main.go:141] libmachine: (embed-certs-783146)     
	I1008 18:59:36.423836  582660 main.go:141] libmachine: (embed-certs-783146)     
	I1008 18:59:36.423848  582660 main.go:141] libmachine: (embed-certs-783146)   </devices>
	I1008 18:59:36.423858  582660 main.go:141] libmachine: (embed-certs-783146) </domain>
	I1008 18:59:36.423870  582660 main.go:141] libmachine: (embed-certs-783146) 
	I1008 18:59:36.427992  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:a2:77:a2 in network default
	I1008 18:59:36.428494  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:36.428511  582660 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 18:59:36.429108  582660 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 18:59:36.429358  582660 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 18:59:36.429774  582660 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 18:59:36.430479  582660 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 18:59:37.640640  582660 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 18:59:37.641345  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:37.641773  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:37.641822  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:37.641765  582683 retry.go:31] will retry after 233.618149ms: waiting for machine to come up
	I1008 18:59:37.877345  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:37.878005  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:37.878037  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:37.877946  582683 retry.go:31] will retry after 284.969949ms: waiting for machine to come up
	I1008 18:59:38.164511  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:38.165031  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:38.165056  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:38.164993  582683 retry.go:31] will retry after 444.628465ms: waiting for machine to come up
	I1008 18:59:38.611840  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:38.612405  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:38.612435  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:38.612368  582683 retry.go:31] will retry after 549.108208ms: waiting for machine to come up
	I1008 18:59:39.163743  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:39.164279  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:39.164305  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:39.164226  582683 retry.go:31] will retry after 471.137281ms: waiting for machine to come up
	I1008 18:59:39.636873  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:39.637362  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:39.637408  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:39.637262  582683 retry.go:31] will retry after 821.69092ms: waiting for machine to come up
	I1008 18:59:40.460742  582660 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 18:59:40.461254  582660 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 18:59:40.461282  582660 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 18:59:40.461211  582683 retry.go:31] will retry after 873.984221ms: waiting for machine to come up
	I1008 18:59:40.817732  581808 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 18:59:40.817760  581808 pod_ready.go:82] duration metric: took 380.701086ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 18:59:40.817770  581808 pod_ready.go:39] duration metric: took 10.924902007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:59:40.817800  581808 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:59:40.817879  581808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:59:40.836303  581808 api_server.go:72] duration metric: took 11.776697284s to wait for apiserver process to appear ...
	I1008 18:59:40.836333  581808 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:59:40.836357  581808 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 18:59:40.842417  581808 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 18:59:40.843599  581808 api_server.go:141] control plane version: v1.31.1
	I1008 18:59:40.843626  581808 api_server.go:131] duration metric: took 7.284456ms to wait for apiserver health ...
	I1008 18:59:40.843635  581808 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:59:41.020199  581808 system_pods.go:59] 7 kube-system pods found
	I1008 18:59:41.020229  581808 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 18:59:41.020235  581808 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 18:59:41.020238  581808 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 18:59:41.020242  581808 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 18:59:41.020252  581808 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 18:59:41.020259  581808 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 18:59:41.020264  581808 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 18:59:41.020272  581808 system_pods.go:74] duration metric: took 176.629694ms to wait for pod list to return data ...
	I1008 18:59:41.020282  581808 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:59:41.217939  581808 default_sa.go:45] found service account: "default"
	I1008 18:59:41.217969  581808 default_sa.go:55] duration metric: took 197.680009ms for default service account to be created ...
	I1008 18:59:41.217979  581808 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:59:41.420860  581808 system_pods.go:86] 7 kube-system pods found
	I1008 18:59:41.420902  581808 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 18:59:41.420911  581808 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 18:59:41.420917  581808 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 18:59:41.420923  581808 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 18:59:41.420928  581808 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 18:59:41.420933  581808 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 18:59:41.420938  581808 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 18:59:41.420948  581808 system_pods.go:126] duration metric: took 202.961551ms to wait for k8s-apps to be running ...
	I1008 18:59:41.420957  581808 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:59:41.421013  581808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:59:41.438209  581808 system_svc.go:56] duration metric: took 17.242856ms WaitForService to wait for kubelet
	I1008 18:59:41.438237  581808 kubeadm.go:582] duration metric: took 12.378638757s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:59:41.438262  581808 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:59:41.618887  581808 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:59:41.618924  581808 node_conditions.go:123] node cpu capacity is 2
	I1008 18:59:41.618939  581808 node_conditions.go:105] duration metric: took 180.670398ms to run NodePressure ...
	I1008 18:59:41.618967  581808 start.go:241] waiting for startup goroutines ...
	I1008 18:59:41.618977  581808 start.go:246] waiting for cluster config update ...
	I1008 18:59:41.618993  581808 start.go:255] writing updated cluster config ...
	I1008 18:59:41.619333  581808 ssh_runner.go:195] Run: rm -f paused
	I1008 18:59:41.682599  581808 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:59:41.685280  581808 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
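	The healthz wait a few lines above (api_server.go:253/279) is a simple poll-until-200 loop against the apiserver. A rough, self-contained Go sketch of that pattern follows; the endpoint URL is the one from the log, but the timeout, the poll interval, and the InsecureSkipVerify shortcut are assumptions made for illustration (the real client verifies the cluster CA from the kubeconfig).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch-only shortcut: the apiserver certificate is not in the
			// system trust store, so verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; the two-minute bound is assumed.
		if err := waitForHealthz("https://192.168.61.141:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver reports ok")
	}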
	I1008 18:59:38.177753  581274 main.go:141] libmachine: (kubernetes-upgrade-302431) Calling .GetIP
	I1008 18:59:38.180903  581274 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:59:38.181298  581274 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:7c:3d", ip: ""} in network mk-kubernetes-upgrade-302431: {Iface:virbr2 ExpiryTime:2024-10-08 19:57:02 +0000 UTC Type:0 Mac:52:54:00:6e:7c:3d Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-302431 Clientid:01:52:54:00:6e:7c:3d}
	I1008 18:59:38.181331  581274 main.go:141] libmachine: (kubernetes-upgrade-302431) DBG | domain kubernetes-upgrade-302431 has defined IP address 192.168.50.39 and MAC address 52:54:00:6e:7c:3d in network mk-kubernetes-upgrade-302431
	I1008 18:59:38.181592  581274 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 18:59:38.186119  581274 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-302431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:59:38.186245  581274 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:59:38.186306  581274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:59:38.233481  581274 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:59:38.233502  581274 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:59:38.233548  581274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:59:38.315989  581274 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:59:38.316018  581274 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:59:38.316029  581274 kubeadm.go:934] updating node { 192.168.50.39 8443 v1.31.1 crio true true} ...
	I1008 18:59:38.316163  581274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-302431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:59:38.316280  581274 ssh_runner.go:195] Run: crio config
	I1008 18:59:38.645061  581274 cni.go:84] Creating CNI manager for ""
	I1008 18:59:38.645086  581274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:59:38.645104  581274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:59:38.645129  581274 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-302431 NodeName:kubernetes-upgrade-302431 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:59:38.645302  581274 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-302431"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:59:38.645366  581274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:59:38.698984  581274 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:59:38.699072  581274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:59:38.780637  581274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1008 18:59:38.839043  581274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:59:38.950504  581274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1008 18:59:39.010610  581274 ssh_runner.go:195] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I1008 18:59:39.015200  581274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:59:39.218961  581274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:59:39.239444  581274 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431 for IP: 192.168.50.39
	I1008 18:59:39.239478  581274 certs.go:194] generating shared ca certs ...
	I1008 18:59:39.239501  581274 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:59:39.239709  581274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:59:39.239775  581274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:59:39.239790  581274 certs.go:256] generating profile certs ...
	I1008 18:59:39.239898  581274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/client.key
	I1008 18:59:39.239970  581274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key.8236a5c9
	I1008 18:59:39.240024  581274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.key
	I1008 18:59:39.240180  581274 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:59:39.240221  581274 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:59:39.240235  581274 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:59:39.240277  581274 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:59:39.240321  581274 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:59:39.240353  581274 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:59:39.240406  581274 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:59:39.241250  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:59:39.272385  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:59:39.298434  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:59:39.335997  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:59:39.417826  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 18:59:39.485309  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:59:39.519808  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:59:39.547487  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kubernetes-upgrade-302431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:59:39.614455  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:59:39.650252  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:59:39.682621  581274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:59:39.708729  581274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:59:39.731278  581274 ssh_runner.go:195] Run: openssl version
	I1008 18:59:39.740848  581274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:59:39.754397  581274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:59:39.759399  581274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:59:39.759492  581274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:59:39.766348  581274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:59:39.777198  581274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:59:39.789622  581274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:59:39.794285  581274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:59:39.794369  581274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:59:39.801132  581274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:59:39.812468  581274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:59:39.824132  581274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:59:39.828676  581274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:59:39.828744  581274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:59:39.834994  581274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:59:39.845162  581274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:59:39.849807  581274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:59:39.855524  581274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:59:39.861146  581274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:59:39.867018  581274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:59:39.872427  581274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:59:39.880902  581274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 18:59:39.887335  581274 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-302431 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-302431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:59:39.887431  581274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:59:39.887478  581274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:59:39.949315  581274 cri.go:89] found id: "9ed02f8a5e7d5f2700856f567abfec1e47d042d887049a1b8da2b3c33b6a814a"
	I1008 18:59:39.949350  581274 cri.go:89] found id: "da898b882a951e4a54f08306cb3db72c84502d875fac8ae8e76baff3fd5107f6"
	I1008 18:59:39.949355  581274 cri.go:89] found id: "ecad2f4b3e27e951b105d3a4cc5e73a5f29677997df3bd48b6a1b64e6496b0f9"
	I1008 18:59:39.949360  581274 cri.go:89] found id: "1dc742f4452f52a976f86da2f3f6b9f40db23bc90a56be11aab4b777d60cad9f"
	I1008 18:59:39.949364  581274 cri.go:89] found id: "1cb27c889f5087238532cb88ce4ed352cf145b27e564f2064b10697c611048a3"
	I1008 18:59:39.949370  581274 cri.go:89] found id: "171728a11e6600323d6f7d034e799ceac86b7bb1e23d224ee855c3f1d39ece28"
	I1008 18:59:39.949373  581274 cri.go:89] found id: "51b967de59f09abf4f9bd92e92cbc72e23a9c49043ede65c0115dcfb5774d9af"
	I1008 18:59:39.949377  581274 cri.go:89] found id: "7df1dadd40d50c661e58ef617ca5b23bdafcc4a669487e51aea894c1279a4695"
	I1008 18:59:39.949381  581274 cri.go:89] found id: "ad7181cc01438a227981e3d19dc91d9b22b137c71b36745da3e290308f234316"
	I1008 18:59:39.949388  581274 cri.go:89] found id: "741a38cda711f61ead130d0661d1d0054905e9cbd1a0034b00db96982474c505"
	I1008 18:59:39.949392  581274 cri.go:89] found id: "212a39c84b035c276541081a802b5a34dab94a5ae64acd97eed5802616c93525"
	I1008 18:59:39.949396  581274 cri.go:89] found id: "f42cbe1a999fa7fd25cadc2869a66c80399b9c3c0d62db04a37c7c23324cc6e1"
	I1008 18:59:39.949400  581274 cri.go:89] found id: "325212bcf7cbf0c742e898795f92d277a475885b065b3327234eae34449e1757"
	I1008 18:59:39.949404  581274 cri.go:89] found id: "6ec355e89baffae69af0bebc24a79c079675c07d0c676c288aabff3b85bf9175"
	I1008 18:59:39.949411  581274 cri.go:89] found id: "6b010cb1d3b1599cd9a9842cf9518fdb55aef439bdcae530ec88ae2d96ab37b1"
	I1008 18:59:39.949419  581274 cri.go:89] found id: "f75245bb026384fcf1e95d6cb20cc448a95ff6db24e20281e21e9490fb458dc4"
	I1008 18:59:39.949423  581274 cri.go:89] found id: "79c7baf2009d8e32a889d4d08f7c5398f05913d38931fff7d55fe7b9531fd5d0"
	I1008 18:59:39.949429  581274 cri.go:89] found id: "bb09b696f8537c1e01dc2751cacb700ddda52ae923092ea997ebadc1b46e432b"
	I1008 18:59:39.949433  581274 cri.go:89] found id: ""
	I1008 18:59:39.949497  581274 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-302431 -n kubernetes-upgrade-302431
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-302431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-302431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-302431
--- FAIL: TestKubernetesUpgrade (521.82s)
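
Editor's note: the post-mortem above gathers cluster state by asking kubectl for every pod whose phase is not Running, using a jsonpath output plus a field selector. Below is a minimal standalone sketch of that same check in Go, shelling out to kubectl; it is not part of the test suite, and the availability of kubectl on PATH and the context name "kubernetes-upgrade-302431" are assumptions copied from the log for illustration.

	// Sketch only: list pods that are not in the Running phase, as the
	// post-mortem command above does. Assumes kubectl is on PATH and the
	// named kubeconfig context exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command(
			"kubectl", "--context", "kubernetes-upgrade-302431",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("all pods are Running")
			return
		}
		fmt.Printf("%d pod(s) not Running: %s\n", len(names), strings.Join(names, ", "))
	}
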

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (76.47s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-078692 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-078692 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.523768463s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-078692] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-078692" primary control-plane node in "pause-078692" cluster
	* Updating the running kvm2 "pause-078692" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-078692" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
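
Editor's note: the assertion at pause_test.go:100 above checks the second start's output for a fixed marker line, which the stdout dump above lacks. A minimal sketch of that kind of substring check follows; it is not the actual pause_test.go code, and the binary path and profile name are copied from the log as illustrative assumptions.

	// Sketch only: rerun `minikube start` on an existing profile and require
	// that its combined output contains the expected "no reconfiguration"
	// marker. Binary path and profile name are taken from the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const marker = "The running cluster does not require reconfiguration"
		out, err := exec.Command(
			"out/minikube-linux-amd64", "start", "-p", "pause-078692",
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("second start failed: %v\n", err)
		}
		if strings.Contains(string(out), marker) {
			fmt.Println("second start did not reconfigure the cluster")
		} else {
			fmt.Printf("marker %q not found in second start output\n", marker)
		}
	}
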
** stderr ** 
	I1008 18:53:19.584436  575275 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:53:19.584547  575275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:53:19.584557  575275 out.go:358] Setting ErrFile to fd 2...
	I1008 18:53:19.584561  575275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:53:19.584735  575275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:53:19.585275  575275 out.go:352] Setting JSON to false
	I1008 18:53:19.586469  575275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9352,"bootTime":1728404248,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:53:19.586584  575275 start.go:139] virtualization: kvm guest
	I1008 18:53:19.589068  575275 out.go:177] * [pause-078692] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:53:19.590654  575275 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:53:19.590670  575275 notify.go:220] Checking for updates...
	I1008 18:53:19.593145  575275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:53:19.594635  575275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:53:19.595816  575275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:53:19.596838  575275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:53:19.597925  575275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:53:19.599560  575275 config.go:182] Loaded profile config "pause-078692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:53:19.600005  575275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:53:19.600062  575275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:53:19.615505  575275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I1008 18:53:19.616144  575275 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:53:19.616730  575275 main.go:141] libmachine: Using API Version  1
	I1008 18:53:19.616755  575275 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:53:19.617201  575275 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:53:19.617443  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:19.617801  575275 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:53:19.618268  575275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:53:19.618345  575275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:53:19.640543  575275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I1008 18:53:19.641109  575275 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:53:19.641725  575275 main.go:141] libmachine: Using API Version  1
	I1008 18:53:19.641756  575275 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:53:19.642178  575275 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:53:19.642393  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:19.689312  575275 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 18:53:19.690490  575275 start.go:297] selected driver: kvm2
	I1008 18:53:19.690506  575275 start.go:901] validating driver "kvm2" against &{Name:pause-078692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:pause-078692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:53:19.690688  575275 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:53:19.691135  575275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:53:19.691221  575275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:53:19.707129  575275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:53:19.707859  575275 cni.go:84] Creating CNI manager for ""
	I1008 18:53:19.707929  575275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:53:19.708009  575275 start.go:340] cluster config:
	{Name:pause-078692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-078692 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alia
ses:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:53:19.708190  575275 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:53:19.710107  575275 out.go:177] * Starting "pause-078692" primary control-plane node in "pause-078692" cluster
	I1008 18:53:19.711276  575275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:53:19.711329  575275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:53:19.711343  575275 cache.go:56] Caching tarball of preloaded images
	I1008 18:53:19.711426  575275 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:53:19.711439  575275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:53:19.711608  575275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/config.json ...
	I1008 18:53:19.711876  575275 start.go:360] acquireMachinesLock for pause-078692: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:53:35.758801  575275 start.go:364] duration metric: took 16.046873874s to acquireMachinesLock for "pause-078692"
	I1008 18:53:35.758907  575275 start.go:96] Skipping create...Using existing machine configuration
	I1008 18:53:35.758921  575275 fix.go:54] fixHost starting: 
	I1008 18:53:35.759352  575275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:53:35.759407  575275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:53:35.779551  575275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I1008 18:53:35.780066  575275 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:53:35.780632  575275 main.go:141] libmachine: Using API Version  1
	I1008 18:53:35.780666  575275 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:53:35.781069  575275 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:53:35.781312  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:35.781443  575275 main.go:141] libmachine: (pause-078692) Calling .GetState
	I1008 18:53:35.783036  575275 fix.go:112] recreateIfNeeded on pause-078692: state=Running err=<nil>
	W1008 18:53:35.783061  575275 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 18:53:35.784634  575275 out.go:177] * Updating the running kvm2 "pause-078692" VM ...
	I1008 18:53:35.785667  575275 machine.go:93] provisionDockerMachine start ...
	I1008 18:53:35.785696  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:35.785884  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:35.788475  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:35.788791  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:35.788815  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:35.788950  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:35.789110  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:35.789276  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:35.789407  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:35.789571  575275 main.go:141] libmachine: Using SSH client type: native
	I1008 18:53:35.789804  575275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I1008 18:53:35.789824  575275 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 18:53:35.898951  575275 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-078692
	
	I1008 18:53:35.898989  575275 main.go:141] libmachine: (pause-078692) Calling .GetMachineName
	I1008 18:53:35.899236  575275 buildroot.go:166] provisioning hostname "pause-078692"
	I1008 18:53:35.899267  575275 main.go:141] libmachine: (pause-078692) Calling .GetMachineName
	I1008 18:53:35.899441  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:35.902026  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:35.902362  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:35.902403  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:35.902505  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:35.902652  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:35.902807  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:35.902908  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:35.903028  575275 main.go:141] libmachine: Using SSH client type: native
	I1008 18:53:35.903250  575275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I1008 18:53:35.903268  575275 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-078692 && echo "pause-078692" | sudo tee /etc/hostname
	I1008 18:53:36.029707  575275 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-078692
	
	I1008 18:53:36.029753  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:36.032690  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.033033  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:36.033077  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.033345  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:36.033520  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:36.033667  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:36.033852  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:36.033993  575275 main.go:141] libmachine: Using SSH client type: native
	I1008 18:53:36.034243  575275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I1008 18:53:36.034265  575275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-078692' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-078692/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-078692' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:53:36.139099  575275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:53:36.139134  575275 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:53:36.139165  575275 buildroot.go:174] setting up certificates
	I1008 18:53:36.139174  575275 provision.go:84] configureAuth start
	I1008 18:53:36.139183  575275 main.go:141] libmachine: (pause-078692) Calling .GetMachineName
	I1008 18:53:36.139467  575275 main.go:141] libmachine: (pause-078692) Calling .GetIP
	I1008 18:53:36.142240  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.142661  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:36.142684  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.142819  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:36.145012  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.145350  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:36.145375  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.145493  575275 provision.go:143] copyHostCerts
	I1008 18:53:36.145560  575275 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:53:36.145574  575275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:53:36.145640  575275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:53:36.145752  575275 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:53:36.145762  575275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:53:36.145789  575275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:53:36.145867  575275 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:53:36.145876  575275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:53:36.145904  575275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:53:36.145972  575275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.pause-078692 san=[127.0.0.1 192.168.61.72 localhost minikube pause-078692]
	I1008 18:53:36.280096  575275 provision.go:177] copyRemoteCerts
	I1008 18:53:36.280169  575275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:53:36.280210  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:36.283424  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.283802  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:36.283840  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.284075  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:36.284282  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:36.284455  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:36.284598  575275 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/pause-078692/id_rsa Username:docker}
	I1008 18:53:36.376842  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:53:36.407887  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 18:53:36.437671  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:53:36.468125  575275 provision.go:87] duration metric: took 328.937803ms to configureAuth
	I1008 18:53:36.468154  575275 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:53:36.468365  575275 config.go:182] Loaded profile config "pause-078692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:53:36.468464  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:36.471652  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.472077  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:36.472107  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:36.472316  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:36.472473  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:36.472646  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:36.472788  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:36.472949  575275 main.go:141] libmachine: Using SSH client type: native
	I1008 18:53:36.473146  575275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I1008 18:53:36.473165  575275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:53:42.011367  575275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:53:42.011398  575275 machine.go:96] duration metric: took 6.225710522s to provisionDockerMachine
	I1008 18:53:42.011415  575275 start.go:293] postStartSetup for "pause-078692" (driver="kvm2")
	I1008 18:53:42.011429  575275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:53:42.011454  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:42.011787  575275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:53:42.011824  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:42.015265  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.015743  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:42.015773  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.016159  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:42.016380  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:42.016551  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:42.016671  575275 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/pause-078692/id_rsa Username:docker}
	I1008 18:53:42.109674  575275 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:53:42.115350  575275 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:53:42.115382  575275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:53:42.115446  575275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:53:42.115536  575275 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:53:42.115651  575275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:53:42.128750  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:53:42.156078  575275 start.go:296] duration metric: took 144.646678ms for postStartSetup
	I1008 18:53:42.156124  575275 fix.go:56] duration metric: took 6.397204853s for fixHost
	I1008 18:53:42.156147  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:42.159134  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.159499  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:42.159530  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.159733  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:42.159936  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:42.160129  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:42.160296  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:42.160475  575275 main.go:141] libmachine: Using SSH client type: native
	I1008 18:53:42.160657  575275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I1008 18:53:42.160667  575275 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:53:42.268080  575275 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728413622.256548546
	
	I1008 18:53:42.268111  575275 fix.go:216] guest clock: 1728413622.256548546
	I1008 18:53:42.268122  575275 fix.go:229] Guest: 2024-10-08 18:53:42.256548546 +0000 UTC Remote: 2024-10-08 18:53:42.156128643 +0000 UTC m=+22.618055361 (delta=100.419903ms)
	I1008 18:53:42.268168  575275 fix.go:200] guest clock delta is within tolerance: 100.419903ms
	I1008 18:53:42.268181  575275 start.go:83] releasing machines lock for "pause-078692", held for 6.509315562s
	I1008 18:53:42.268215  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:42.268500  575275 main.go:141] libmachine: (pause-078692) Calling .GetIP
	I1008 18:53:42.271853  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.272345  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:42.272364  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.272751  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:42.273250  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:42.273469  575275 main.go:141] libmachine: (pause-078692) Calling .DriverName
	I1008 18:53:42.273552  575275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:53:42.273605  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:42.273950  575275 ssh_runner.go:195] Run: cat /version.json
	I1008 18:53:42.273970  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHHostname
	I1008 18:53:42.277207  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.277596  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.277966  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:42.277990  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.278303  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:42.278353  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:42.278390  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:42.278553  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:42.278769  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:42.278819  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHPort
	I1008 18:53:42.278889  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHKeyPath
	I1008 18:53:42.278927  575275 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/pause-078692/id_rsa Username:docker}
	I1008 18:53:42.279359  575275 main.go:141] libmachine: (pause-078692) Calling .GetSSHUsername
	I1008 18:53:42.279482  575275 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/pause-078692/id_rsa Username:docker}
	I1008 18:53:42.364441  575275 ssh_runner.go:195] Run: systemctl --version
	I1008 18:53:42.389133  575275 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:53:42.548871  575275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:53:42.556644  575275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:53:42.556731  575275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:53:42.568607  575275 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 18:53:42.568641  575275 start.go:495] detecting cgroup driver to use...
	I1008 18:53:42.568714  575275 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:53:42.592431  575275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:53:42.607522  575275 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:53:42.607601  575275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:53:42.621381  575275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:53:42.635469  575275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:53:42.802667  575275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:53:42.953433  575275 docker.go:233] disabling docker service ...
	I1008 18:53:42.953519  575275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:53:42.969915  575275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:53:42.984011  575275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:53:43.141543  575275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:53:43.309074  575275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:53:43.329795  575275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:53:43.353352  575275 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 18:53:43.353426  575275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.364944  575275 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:53:43.365003  575275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.378962  575275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.392177  575275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.403510  575275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:53:43.415545  575275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.427003  575275 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.440579  575275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:53:43.453201  575275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:53:43.465983  575275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:53:43.476800  575275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:53:43.642750  575275 ssh_runner.go:195] Run: sudo systemctl restart crio
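For context, a minimal sketch of how the CRI-O settings targeted by the sed edits above could be confirmed on the node after the restart; the drop-in path and values are taken from the commands logged above, but the exact file contents may differ:
	# Show the keys the edits are meant to set in the drop-in used above
	grep -E '^ *(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	# Expected (illustrative):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	# Restart and sanity-check the runtime socket, as the log does next
	sudo systemctl restart crio
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version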
	I1008 18:53:43.876996  575275 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:53:43.877092  575275 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:53:43.883226  575275 start.go:563] Will wait 60s for crictl version
	I1008 18:53:43.883288  575275 ssh_runner.go:195] Run: which crictl
	I1008 18:53:43.887570  575275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:53:43.924719  575275 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:53:43.924837  575275 ssh_runner.go:195] Run: crio --version
	I1008 18:53:43.967741  575275 ssh_runner.go:195] Run: crio --version
	I1008 18:53:43.998382  575275 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 18:53:43.999579  575275 main.go:141] libmachine: (pause-078692) Calling .GetIP
	I1008 18:53:44.002677  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:44.003272  575275 main.go:141] libmachine: (pause-078692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:0a:9a", ip: ""} in network mk-pause-078692: {Iface:virbr3 ExpiryTime:2024-10-08 19:52:39 +0000 UTC Type:0 Mac:52:54:00:7c:0a:9a Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-078692 Clientid:01:52:54:00:7c:0a:9a}
	I1008 18:53:44.003301  575275 main.go:141] libmachine: (pause-078692) DBG | domain pause-078692 has defined IP address 192.168.61.72 and MAC address 52:54:00:7c:0a:9a in network mk-pause-078692
	I1008 18:53:44.003697  575275 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
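The grep above checks for the host-gateway entry minikube maintains in /etc/hosts; a small sketch of what that entry looks like and how it could be appended when missing (the gateway IP is the one from this log):
	grep 'host.minikube.internal' /etc/hosts || \
	  echo "192.168.61.1	host.minikube.internal" | sudo tee -a /etc/hosts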
	I1008 18:53:44.011162  575275 kubeadm.go:883] updating cluster {Name:pause-078692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-078692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:53:44.011339  575275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:53:44.011402  575275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:53:44.222217  575275 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:53:44.222251  575275 crio.go:433] Images already preloaded, skipping extraction
	I1008 18:53:44.222343  575275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:53:44.432813  575275 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 18:53:44.432852  575275 cache_images.go:84] Images are preloaded, skipping loading
	I1008 18:53:44.432864  575275 kubeadm.go:934] updating node { 192.168.61.72 8443 v1.31.1 crio true true} ...
	I1008 18:53:44.433082  575275 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-078692 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-078692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
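As a rough sketch, the kubelet flags above land in a systemd drop-in and unit file (the paths are the scp targets logged a few lines below; contents are illustrative) that can be inspected and reloaded like this:
	# Drop-in and unit written by minikube (see the scp lines below)
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	cat /lib/systemd/system/kubelet.service
	# Pick up changes and verify the kubelet came up with the new ExecStart
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet
	systemctl cat kubelet | grep -A2 '^ExecStart='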
	I1008 18:53:44.433216  575275 ssh_runner.go:195] Run: crio config
	I1008 18:53:44.801514  575275 cni.go:84] Creating CNI manager for ""
	I1008 18:53:44.801548  575275 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:53:44.801569  575275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:53:44.801601  575275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.72 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-078692 NodeName:pause-078692 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 18:53:44.801809  575275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-078692"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
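A quick way to sanity-check a generated config like the one above before kubeadm consumes it; the file path is the scp target logged below and the kubeadm binary path comes from the kubelet ExecStart above, while the validate subcommand is an assumption about this kubeadm release:
	# Inspect the file minikube writes to the node (path from the scp line below)
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# Compare against upstream defaults for the same kubeadm version
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config print init-defaults
	# Recent kubeadm releases can also validate the file directly (assumption: available here)
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new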
	
	I1008 18:53:44.801889  575275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 18:53:44.843211  575275 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:53:44.843289  575275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:53:44.859489  575275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1008 18:53:44.987960  575275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:53:45.044028  575275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1008 18:53:45.080022  575275 ssh_runner.go:195] Run: grep 192.168.61.72	control-plane.minikube.internal$ /etc/hosts
	I1008 18:53:45.089586  575275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:53:45.343164  575275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:53:45.366955  575275 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692 for IP: 192.168.61.72
	I1008 18:53:45.366997  575275 certs.go:194] generating shared ca certs ...
	I1008 18:53:45.367021  575275 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:53:45.367269  575275 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:53:45.367349  575275 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:53:45.367376  575275 certs.go:256] generating profile certs ...
	I1008 18:53:45.367506  575275 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/client.key
	I1008 18:53:45.367629  575275 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/apiserver.key.0254f442
	I1008 18:53:45.367701  575275 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/proxy-client.key
	I1008 18:53:45.367908  575275 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:53:45.367969  575275 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:53:45.367983  575275 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:53:45.368037  575275 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:53:45.368072  575275 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:53:45.368119  575275 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:53:45.368196  575275 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:53:45.369162  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:53:45.407367  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:53:45.442342  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:53:45.481017  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:53:45.521235  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 18:53:45.553967  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:53:45.600031  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:53:45.638899  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/pause-078692/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:53:45.728089  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:53:45.775352  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:53:45.821890  575275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:53:45.853796  575275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:53:45.892187  575275 ssh_runner.go:195] Run: openssl version
	I1008 18:53:45.901017  575275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:53:45.921164  575275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:53:45.931306  575275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:53:45.931372  575275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:53:45.940817  575275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:53:45.953062  575275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:53:45.967959  575275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:53:45.974699  575275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:53:45.974745  575275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:53:45.984671  575275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:53:45.996245  575275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:53:46.009488  575275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:53:46.015831  575275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:53:46.015878  575275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:53:46.023811  575275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
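The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow the OpenSSL subject-hash convention; a minimal sketch of how such a link is derived for any CA file, using the minikubeCA path from this log as the example:
	CERT=/usr/share/ca-certificates/minikubeCA.pem       # any PEM certificate
	HASH=$(openssl x509 -hash -noout -in "$CERT")        # e.g. b5213941, as in the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # .0 suffix: first cert with this hash
	openssl verify -CApath /etc/ssl/certs "$CERT"        # hashed dir is now usable for lookups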
	I1008 18:53:46.037354  575275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:53:46.041819  575275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 18:53:46.048746  575275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 18:53:46.055306  575275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 18:53:46.063710  575275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 18:53:46.070930  575275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 18:53:46.076467  575275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
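For reference, the -checkend probes above exit non-zero when a certificate will expire within the given window; a minimal sketch of the same check with explicit handling (file path illustrative):
	# Exit 0 only if the cert is still valid 24h (86400s) from now
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver.crt valid for at least another day"
	else
	    echo "apiserver.crt expires within 24h (or is unreadable)" >&2
	fi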
	I1008 18:53:46.085307  575275 kubeadm.go:392] StartCluster: {Name:pause-078692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-078692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:53:46.085445  575275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:53:46.085529  575275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:53:46.149732  575275 cri.go:89] found id: "7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074"
	I1008 18:53:46.149767  575275 cri.go:89] found id: "6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb"
	I1008 18:53:46.149772  575275 cri.go:89] found id: "4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89"
	I1008 18:53:46.149777  575275 cri.go:89] found id: "b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0"
	I1008 18:53:46.149781  575275 cri.go:89] found id: "9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a"
	I1008 18:53:46.149787  575275 cri.go:89] found id: "88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2"
	I1008 18:53:46.149794  575275 cri.go:89] found id: "89a3f478a9212a6570436f278bc85f3071cdf8de84cdef590bd01c4106661f6a"
	I1008 18:53:46.149801  575275 cri.go:89] found id: "9cc643e4c2cf0a0cd91555c3ddbe85bcecd6eaa05a5dc3ee9d583129d92368ac"
	I1008 18:53:46.149806  575275 cri.go:89] found id: "019b713532b8f5182bd198f7a931f8f1b2cfde289d0ef55cb65b091270bc7337"
	I1008 18:53:46.149815  575275 cri.go:89] found id: "911c196c95e085a8344f391869cc288b24c1930365913ade10897dfe4b7d9cd0"
	I1008 18:53:46.149819  575275 cri.go:89] found id: "689ac22cea3540bc7eb0720afc4371b315078bc63efa0dc68e2554660d152b8e"
	I1008 18:53:46.149823  575275 cri.go:89] found id: "f105345c460dfafb4f0bd9517d9fd3a35739e4cd2dacf7c78a11de30d6bbade0"
	I1008 18:53:46.149827  575275 cri.go:89] found id: ""
	I1008 18:53:46.149878  575275 ssh_runner.go:195] Run: sudo runc list -f json
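A sketch of how the container IDs found above can be examined further on the node; the label filter and the example ID are taken from this log, and the crictl subcommands are the standard ones:
	# List kube-system containers the same way the log does
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Inspect and tail the logs of one of the IDs returned above
	sudo crictl inspect 7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074 | head -n 20
	sudo crictl logs --tail 50 7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074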

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-078692 -n pause-078692
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-078692 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-078692 logs -n 25: (1.385605682s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-133603         | test-preload-133603       | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:49 UTC |
	| start   | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:50 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:50 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:51 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:51 UTC |
	| start   | -p offline-crio-907125         | offline-crio-907125       | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:52 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302431   | kubernetes-upgrade-302431 | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-078692 --memory=2048  | pause-078692              | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:53 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-204592      | minikube                  | jenkins | v1.26.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:53 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-907125         | offline-crio-907125       | jenkins | v1.34.0 | 08 Oct 24 18:52 UTC | 08 Oct 24 18:52 UTC |
	| start   | -p running-upgrade-390529      | minikube                  | jenkins | v1.26.0 | 08 Oct 24 18:52 UTC | 08 Oct 24 18:54 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-078692                | pause-078692              | jenkins | v1.34.0 | 08 Oct 24 18:53 UTC | 08 Oct 24 18:54 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-204592 stop    | minikube                  | jenkins | v1.26.0 | 08 Oct 24 18:53 UTC | 08 Oct 24 18:53 UTC |
	| start   | -p stopped-upgrade-204592      | stopped-upgrade-204592    | jenkins | v1.34.0 | 08 Oct 24 18:53 UTC | 08 Oct 24 18:54 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-390529      | running-upgrade-390529    | jenkins | v1.34.0 | 08 Oct 24 18:54 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-204592      | stopped-upgrade-204592    | jenkins | v1.34.0 | 08 Oct 24 18:54 UTC | 08 Oct 24 18:54 UTC |
	| start   | -p force-systemd-env-193077    | force-systemd-env-193077  | jenkins | v1.34.0 | 08 Oct 24 18:54 UTC |                     |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:54:31
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:54:31.263755  576068 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:54:31.264011  576068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:54:31.264022  576068 out.go:358] Setting ErrFile to fd 2...
	I1008 18:54:31.264029  576068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:54:31.264246  576068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:54:31.264830  576068 out.go:352] Setting JSON to false
	I1008 18:54:31.265840  576068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9423,"bootTime":1728404248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:54:31.265955  576068 start.go:139] virtualization: kvm guest
	I1008 18:54:31.267936  576068 out.go:177] * [force-systemd-env-193077] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:54:31.269074  576068 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:54:31.269124  576068 notify.go:220] Checking for updates...
	I1008 18:54:31.271028  576068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:54:31.271974  576068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:54:31.272938  576068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:54:31.274000  576068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:54:31.275152  576068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1008 18:54:31.276594  576068 config.go:182] Loaded profile config "kubernetes-upgrade-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 18:54:31.276726  576068 config.go:182] Loaded profile config "pause-078692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:54:31.276811  576068 config.go:182] Loaded profile config "running-upgrade-390529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1008 18:54:31.276891  576068 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:54:31.313998  576068 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 18:54:31.315199  576068 start.go:297] selected driver: kvm2
	I1008 18:54:31.315212  576068 start.go:901] validating driver "kvm2" against <nil>
	I1008 18:54:31.315227  576068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:54:31.315891  576068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:54:31.315954  576068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:54:31.331888  576068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:54:31.331936  576068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:54:31.332247  576068 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 18:54:31.332284  576068 cni.go:84] Creating CNI manager for ""
	I1008 18:54:31.332333  576068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:54:31.332341  576068 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 18:54:31.332415  576068 start.go:340] cluster config:
	{Name:force-systemd-env-193077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-193077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:54:31.332514  576068 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:54:31.334065  576068 out.go:177] * Starting "force-systemd-env-193077" primary control-plane node in "force-systemd-env-193077" cluster
	I1008 18:54:31.335129  576068 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:54:31.335160  576068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:54:31.335172  576068 cache.go:56] Caching tarball of preloaded images
	I1008 18:54:31.335246  576068 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:54:31.335282  576068 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:54:31.335384  576068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/force-systemd-env-193077/config.json ...
	I1008 18:54:31.335406  576068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/force-systemd-env-193077/config.json: {Name:mk6ef5103ac80391c6920ed45224eb9c8fa9a7e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:54:31.335562  576068 start.go:360] acquireMachinesLock for force-systemd-env-193077: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:54:31.335597  576068 start.go:364] duration metric: took 18.56µs to acquireMachinesLock for "force-systemd-env-193077"
	I1008 18:54:31.335620  576068 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-193077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-193077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:54:31.335699  576068 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 18:54:29.909046  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:54:29.909498  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:54:29.909516  573944 kubeadm.go:310] 
	I1008 18:54:29.909606  573944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 18:54:29.909690  573944 kubeadm.go:310] 		timed out waiting for the condition
	I1008 18:54:29.909708  573944 kubeadm.go:310] 
	I1008 18:54:29.909788  573944 kubeadm.go:310] 	This error is likely caused by:
	I1008 18:54:29.909861  573944 kubeadm.go:310] 		- The kubelet is not running
	I1008 18:54:29.910093  573944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 18:54:29.910103  573944 kubeadm.go:310] 
	I1008 18:54:29.910392  573944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 18:54:29.910491  573944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 18:54:29.910601  573944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 18:54:29.910634  573944 kubeadm.go:310] 
	I1008 18:54:29.910933  573944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 18:54:29.911623  573944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 18:54:29.911651  573944 kubeadm.go:310] 
	I1008 18:54:29.911884  573944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 18:54:29.912091  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 18:54:29.912295  573944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 18:54:29.912495  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 18:54:29.912522  573944 kubeadm.go:310] 
	I1008 18:54:29.912919  573944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 18:54:29.913448  573944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 18:54:29.913573  573944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 18:54:29.913704  573944 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
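Since the failure above points at the kubelet, a compact troubleshooting sketch that follows the hints kubeadm prints (these are the same commands listed in the message; nothing beyond them is assumed):
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Then, for whichever container is failing:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID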
	
	I1008 18:54:29.913760  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 18:54:30.906102  573944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:54:30.923525  573944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:54:30.933788  573944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:54:30.933814  573944 kubeadm.go:157] found existing configuration files:
	
	I1008 18:54:30.933869  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:54:30.943454  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:54:30.943516  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:54:30.954492  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:54:30.963676  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:54:30.963717  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:54:30.973366  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:54:30.983733  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:54:30.983792  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:54:30.994523  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:54:31.004856  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:54:31.004900  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 18:54:31.015494  573944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 18:54:31.085548  573944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 18:54:31.085627  573944 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 18:54:31.230446  573944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 18:54:31.230582  573944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 18:54:31.230723  573944 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 18:54:31.442163  573944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 18:54:29.975568  575275 pod_ready.go:93] pod "kube-apiserver-pause-078692" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:29.975600  575275 pod_ready.go:82] duration metric: took 400.072091ms for pod "kube-apiserver-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:29.975614  575275 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.375152  575275 pod_ready.go:93] pod "kube-controller-manager-pause-078692" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:30.375181  575275 pod_ready.go:82] duration metric: took 399.557558ms for pod "kube-controller-manager-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.375196  575275 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q8ntx" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.775303  575275 pod_ready.go:93] pod "kube-proxy-q8ntx" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:30.775329  575275 pod_ready.go:82] duration metric: took 400.125521ms for pod "kube-proxy-q8ntx" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.775339  575275 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:31.175511  575275 pod_ready.go:93] pod "kube-scheduler-pause-078692" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:31.175548  575275 pod_ready.go:82] duration metric: took 400.201452ms for pod "kube-scheduler-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:31.175559  575275 pod_ready.go:39] duration metric: took 1.759750066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:54:31.175577  575275 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:54:31.175641  575275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:54:31.189536  575275 api_server.go:72] duration metric: took 1.981986704s to wait for apiserver process to appear ...
	I1008 18:54:31.189569  575275 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:54:31.189615  575275 api_server.go:253] Checking apiserver healthz at https://192.168.61.72:8443/healthz ...
	I1008 18:54:31.194850  575275 api_server.go:279] https://192.168.61.72:8443/healthz returned 200:
	ok
	I1008 18:54:31.196182  575275 api_server.go:141] control plane version: v1.31.1
	I1008 18:54:31.196212  575275 api_server.go:131] duration metric: took 6.633398ms to wait for apiserver health ...
	I1008 18:54:31.196222  575275 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:54:31.379345  575275 system_pods.go:59] 6 kube-system pods found
	I1008 18:54:31.379382  575275 system_pods.go:61] "coredns-7c65d6cfc9-bzh6z" [5cf0a7a2-70e9-4f34-97d8-3c51d466b442] Running
	I1008 18:54:31.379389  575275 system_pods.go:61] "etcd-pause-078692" [0ec80229-076c-49de-bd5b-9243672f1d09] Running
	I1008 18:54:31.379395  575275 system_pods.go:61] "kube-apiserver-pause-078692" [b678a14a-1ab4-4c38-ac05-d6f14696e296] Running
	I1008 18:54:31.379401  575275 system_pods.go:61] "kube-controller-manager-pause-078692" [facd9af0-dee4-4e0b-8bdd-99fb64357042] Running
	I1008 18:54:31.379406  575275 system_pods.go:61] "kube-proxy-q8ntx" [105b25e7-cc00-415b-ac82-81a2bb828d60] Running
	I1008 18:54:31.379410  575275 system_pods.go:61] "kube-scheduler-pause-078692" [a400f672-e859-43d8-8eae-a1acb5ae9166] Running
	I1008 18:54:31.379418  575275 system_pods.go:74] duration metric: took 183.188485ms to wait for pod list to return data ...
	I1008 18:54:31.379428  575275 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:54:31.575062  575275 default_sa.go:45] found service account: "default"
	I1008 18:54:31.575096  575275 default_sa.go:55] duration metric: took 195.660169ms for default service account to be created ...
	I1008 18:54:31.575108  575275 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:54:31.776609  575275 system_pods.go:86] 6 kube-system pods found
	I1008 18:54:31.776638  575275 system_pods.go:89] "coredns-7c65d6cfc9-bzh6z" [5cf0a7a2-70e9-4f34-97d8-3c51d466b442] Running
	I1008 18:54:31.776647  575275 system_pods.go:89] "etcd-pause-078692" [0ec80229-076c-49de-bd5b-9243672f1d09] Running
	I1008 18:54:31.776657  575275 system_pods.go:89] "kube-apiserver-pause-078692" [b678a14a-1ab4-4c38-ac05-d6f14696e296] Running
	I1008 18:54:31.776661  575275 system_pods.go:89] "kube-controller-manager-pause-078692" [facd9af0-dee4-4e0b-8bdd-99fb64357042] Running
	I1008 18:54:31.776664  575275 system_pods.go:89] "kube-proxy-q8ntx" [105b25e7-cc00-415b-ac82-81a2bb828d60] Running
	I1008 18:54:31.776667  575275 system_pods.go:89] "kube-scheduler-pause-078692" [a400f672-e859-43d8-8eae-a1acb5ae9166] Running
	I1008 18:54:31.776674  575275 system_pods.go:126] duration metric: took 201.560624ms to wait for k8s-apps to be running ...
	I1008 18:54:31.776682  575275 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:54:31.776740  575275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:54:31.795037  575275 system_svc.go:56] duration metric: took 18.343413ms WaitForService to wait for kubelet
	I1008 18:54:31.795070  575275 kubeadm.go:582] duration metric: took 2.587526191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:54:31.795093  575275 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:54:31.975260  575275 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:54:31.975290  575275 node_conditions.go:123] node cpu capacity is 2
	I1008 18:54:31.975306  575275 node_conditions.go:105] duration metric: took 180.20631ms to run NodePressure ...
	I1008 18:54:31.975320  575275 start.go:241] waiting for startup goroutines ...
	I1008 18:54:31.975331  575275 start.go:246] waiting for cluster config update ...
	I1008 18:54:31.975341  575275 start.go:255] writing updated cluster config ...
	I1008 18:54:31.975730  575275 ssh_runner.go:195] Run: rm -f paused
	I1008 18:54:32.036000  575275 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:54:32.037692  575275 out.go:177] * Done! kubectl is now configured to use "pause-078692" cluster and "default" namespace by default
	I1008 18:54:31.445304  573944 out.go:235]   - Generating certificates and keys ...
	I1008 18:54:31.445410  573944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 18:54:31.445559  573944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 18:54:31.445689  573944 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 18:54:31.445791  573944 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 18:54:31.445890  573944 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 18:54:31.445978  573944 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 18:54:31.446069  573944 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 18:54:31.446157  573944 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 18:54:31.446265  573944 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 18:54:31.446433  573944 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 18:54:31.446488  573944 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 18:54:31.446578  573944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 18:54:31.528192  573944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 18:54:31.667896  573944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 18:54:31.934272  573944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 18:54:32.103890  573944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 18:54:32.121309  573944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 18:54:32.122596  573944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 18:54:32.122710  573944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 18:54:32.296864  573944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.767998960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413672767976298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33cab035-dc9d-4e50-b1fd-4737316bc3f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.768614059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac8b1511-84d8-4930-af8d-fcc671b1cb8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.768663105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac8b1511-84d8-4930-af8d-fcc671b1cb8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.768891564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac8b1511-84d8-4930-af8d-fcc671b1cb8e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.819211801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcba75ab-2c72-4355-a0cb-58fcce99e4bd name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.819287536Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcba75ab-2c72-4355-a0cb-58fcce99e4bd name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.820586508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8b1b6e3-d76a-4082-9129-640397e6953f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.820916076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413672820897795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8b1b6e3-d76a-4082-9129-640397e6953f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.821533566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ec3ad4d-1245-4932-9d18-ca81de8d8b0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.821582560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ec3ad4d-1245-4932-9d18-ca81de8d8b0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.821866234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ec3ad4d-1245-4932-9d18-ca81de8d8b0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.868448976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b85df5b-86d1-4466-9527-c1ec8d294c52 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.868552463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b85df5b-86d1-4466-9527-c1ec8d294c52 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.869962217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=821b7f22-4554-4f43-b11a-4189df568cb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.870632012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413672870601589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=821b7f22-4554-4f43-b11a-4189df568cb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.871444711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b720f92d-0402-4cde-a8d9-f519d11cd090 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.871521739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b720f92d-0402-4cde-a8d9-f519d11cd090 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.871898171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b720f92d-0402-4cde-a8d9-f519d11cd090 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.921647514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74677381-6e01-4509-8762-9fa1355a703d name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.921717404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74677381-6e01-4509-8762-9fa1355a703d name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.922850922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93368360-b947-4d0c-b51e-5e618142e6a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.923540065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413672923518314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93368360-b947-4d0c-b51e-5e618142e6a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.924392986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dac4d5a-2c4a-4549-ae2e-c648f06a4121 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.924493775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dac4d5a-2c4a-4549-ae2e-c648f06a4121 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:32 pause-078692 crio[2080]: time="2024-10-08 18:54:32.924746848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dac4d5a-2c4a-4549-ae2e-c648f06a4121 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e592741c5c23       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 seconds ago      Running             coredns                   2                   90cc09d6d4f1b       coredns-7c65d6cfc9-bzh6z
	edadd50d5d743       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 seconds ago      Running             kube-proxy                2                   935e6c908c219       kube-proxy-q8ntx
	45556f542fd5a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Running             etcd                      2                   c0fb2ce712a29       etcd-pause-078692
	d031aac2b9705       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   24 seconds ago      Running             kube-controller-manager   2                   c551edaa1d01d       kube-controller-manager-pause-078692
	bb7c327ca82c7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   24 seconds ago      Running             kube-apiserver            2                   2ff7d04ca1b71       kube-apiserver-pause-078692
	a259e92e95df9       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   26 seconds ago      Running             kube-scheduler            2                   048ed81d48f04       kube-scheduler-pause-078692
	7575b2d44fb69       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   47 seconds ago      Exited              coredns                   1                   90cc09d6d4f1b       coredns-7c65d6cfc9-bzh6z
	6bea03680d9de       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   48 seconds ago      Exited              kube-proxy                1                   935e6c908c219       kube-proxy-q8ntx
	4f4ad5c332bb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   48 seconds ago      Exited              etcd                      1                   c0fb2ce712a29       etcd-pause-078692
	b0c22f3d101a3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   48 seconds ago      Exited              kube-scheduler            1                   048ed81d48f04       kube-scheduler-pause-078692
	9b7138f6dbdfb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   48 seconds ago      Exited              kube-controller-manager   1                   c551edaa1d01d       kube-controller-manager-pause-078692
	88f40d58d918e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   48 seconds ago      Exited              kube-apiserver            1                   2ff7d04ca1b71       kube-apiserver-pause-078692
	
	
	==> coredns [7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54746 - 25435 "HINFO IN 2923528172057456216.7447597708856327474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012869661s
	
	
	==> coredns [9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33521 - 61906 "HINFO IN 6723085254346993799.2029043226398599707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034062921s
	
	
	==> describe nodes <==
	Name:               pause-078692
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-078692
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=pause-078692
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T18_53_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:53:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-078692
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:54:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.72
	  Hostname:    pause-078692
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e51371a6695400eacd206724c383d43
	  System UUID:                8e51371a-6695-400e-acd2-06724c383d43
	  Boot ID:                    51c1f199-1e84-440a-95fb-00abf7116444
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bzh6z                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-pause-078692                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-078692             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-078692    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-q8ntx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-078692             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 44s                kube-proxy       
	  Normal  NodeHasSufficientMemory  94s                kubelet          Node pause-078692 status is now: NodeHasSufficientMemory
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    94s                kubelet          Node pause-078692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                kubelet          Node pause-078692 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node pause-078692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node pause-078692 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     88s                kubelet          Node pause-078692 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s                kubelet          Node pause-078692 status is now: NodeReady
	  Normal  RegisteredNode           84s                node-controller  Node pause-078692 event: Registered Node pause-078692 in Controller
	  Normal  CIDRAssignmentFailed     84s                cidrAllocator    Node pause-078692 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           41s                node-controller  Node pause-078692 event: Registered Node pause-078692 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-078692 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-078692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-078692 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-078692 event: Registered Node pause-078692 in Controller
	
	
	==> dmesg <==
	[  +9.771985] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.066744] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050613] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.198747] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.116754] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.284651] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.925483] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.662405] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.067683] kauditd_printk_skb: 158 callbacks suppressed
	[Oct 8 18:53] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.081451] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.786606] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +0.887867] kauditd_printk_skb: 49 callbacks suppressed
	[ +10.571972] kauditd_printk_skb: 47 callbacks suppressed
	[ +21.747734] systemd-fstab-generator[2004]: Ignoring "noauto" option for root device
	[  +0.173336] systemd-fstab-generator[2016]: Ignoring "noauto" option for root device
	[  +0.183152] systemd-fstab-generator[2030]: Ignoring "noauto" option for root device
	[  +0.153809] systemd-fstab-generator[2042]: Ignoring "noauto" option for root device
	[  +0.343600] systemd-fstab-generator[2071]: Ignoring "noauto" option for root device
	[  +1.641427] systemd-fstab-generator[2593]: Ignoring "noauto" option for root device
	[  +3.360224] kauditd_printk_skb: 195 callbacks suppressed
	[Oct 8 18:54] systemd-fstab-generator[3147]: Ignoring "noauto" option for root device
	[  +7.684300] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.392475] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[  +0.094455] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77] <==
	{"level":"info","ts":"2024-10-08T18:54:13.214814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-08T18:54:13.214938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-08T18:54:13.214982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a received MsgPreVoteResp from d2afe71ba7be449a at term 3"}
	{"level":"info","ts":"2024-10-08T18:54:13.215019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a became candidate at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.215108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a received MsgVoteResp from d2afe71ba7be449a at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.215145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a became leader at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.215189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d2afe71ba7be449a elected leader d2afe71ba7be449a at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.218452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:54:13.219173Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d2afe71ba7be449a","local-member-attributes":"{Name:pause-078692 ClientURLs:[https://192.168.61.72:2379]}","request-path":"/0/members/d2afe71ba7be449a/attributes","cluster-id":"2c86054c8ae24e65","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T18:54:13.219615Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:54:13.219890Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T18:54:13.219921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T18:54:13.220374Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T18:54:13.220436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T18:54:13.221340Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T18:54:13.221908Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.72:2379"}
	{"level":"info","ts":"2024-10-08T18:54:15.275582Z","caller":"traceutil/trace.go:171","msg":"trace[1292991759] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"149.846712ms","start":"2024-10-08T18:54:15.125704Z","end":"2024-10-08T18:54:15.275551Z","steps":["trace[1292991759] 'process raft request'  (duration: 147.495254ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:16.969743Z","caller":"traceutil/trace.go:171","msg":"trace[1643727734] transaction","detail":"{read_only:false; number_of_response:0; response_revision:477; }","duration":"125.82241ms","start":"2024-10-08T18:54:16.843903Z","end":"2024-10-08T18:54:16.969726Z","steps":["trace[1643727734] 'process raft request'  (duration: 83.57925ms)","trace[1643727734] 'compare'  (duration: 41.936844ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-08T18:54:20.619725Z","caller":"traceutil/trace.go:171","msg":"trace[139780329] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"342.423944ms","start":"2024-10-08T18:54:20.277279Z","end":"2024-10-08T18:54:20.619703Z","steps":["trace[139780329] 'process raft request'  (duration: 342.119745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:54:20.620359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:54:20.277258Z","time spent":"342.617494ms","remote":"127.0.0.1:36960","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4954,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-bzh6z\" mod_revision:474 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-bzh6z\" value_size:4895 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-bzh6z\" > >"}
	{"level":"warn","ts":"2024-10-08T18:54:20.813524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.105684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-078692\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-10-08T18:54:20.813620Z","caller":"traceutil/trace.go:171","msg":"trace[216441678] range","detail":"{range_begin:/registry/minions/pause-078692; range_end:; response_count:1; response_revision:523; }","duration":"160.217961ms","start":"2024-10-08T18:54:20.653388Z","end":"2024-10-08T18:54:20.813606Z","steps":["trace[216441678] 'range keys from in-memory index tree'  (duration: 160.033059ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:20.814014Z","caller":"traceutil/trace.go:171","msg":"trace[1939006368] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"154.619563ms","start":"2024-10-08T18:54:20.659385Z","end":"2024-10-08T18:54:20.814005Z","steps":["trace[1939006368] 'process raft request'  (duration: 151.640208ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:20.814784Z","caller":"traceutil/trace.go:171","msg":"trace[2133218642] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"154.008386ms","start":"2024-10-08T18:54:20.660765Z","end":"2024-10-08T18:54:20.814773Z","steps":["trace[2133218642] 'process raft request'  (duration: 153.970229ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:20.815209Z","caller":"traceutil/trace.go:171","msg":"trace[1822846197] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"155.588006ms","start":"2024-10-08T18:54:20.659610Z","end":"2024-10-08T18:54:20.815198Z","steps":["trace[1822846197] 'process raft request'  (duration: 155.063077ms)"],"step_count":1}
	
	
	==> etcd [4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89] <==
	{"level":"info","ts":"2024-10-08T18:53:49.971823Z","caller":"traceutil/trace.go:171","msg":"trace[1995130158] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:395; }","duration":"893.518644ms","start":"2024-10-08T18:53:49.078293Z","end":"2024-10-08T18:53:49.971812Z","steps":["trace[1995130158] 'agreement among raft nodes before linearized reading'  (duration: 893.219254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.971944Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.078269Z","time spent":"893.666138ms","remote":"127.0.0.1:57780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":444,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"warn","ts":"2024-10-08T18:53:49.972110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"874.926695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2024-10-08T18:53:49.972204Z","caller":"traceutil/trace.go:171","msg":"trace[1465299044] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:396; }","duration":"875.029071ms","start":"2024-10-08T18:53:49.097165Z","end":"2024-10-08T18:53:49.972194Z","steps":["trace[1465299044] 'agreement among raft nodes before linearized reading'  (duration: 874.794355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972270Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.097133Z","time spent":"875.127829ms","remote":"127.0.0.1:57982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":465,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"info","ts":"2024-10-08T18:53:49.972431Z","caller":"traceutil/trace.go:171","msg":"trace[345907204] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"822.082366ms","start":"2024-10-08T18:53:49.150341Z","end":"2024-10-08T18:53:49.972423Z","steps":["trace[345907204] 'process raft request'  (duration: 821.567518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972503Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.150322Z","time spent":"822.149371ms","remote":"127.0.0.1:57700","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":789,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/etcd-pause-078692.17fc8f07061f8dc0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/etcd-pause-078692.17fc8f07061f8dc0\" value_size:708 lease:4943424539903899260 >> failure:<>"}
	{"level":"info","ts":"2024-10-08T18:53:49.972513Z","caller":"traceutil/trace.go:171","msg":"trace[1845851366] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"885.875812ms","start":"2024-10-08T18:53:49.086575Z","end":"2024-10-08T18:53:49.972451Z","steps":["trace[1845851366] 'process raft request'  (duration: 884.668426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"816.620123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T18:53:49.972680Z","caller":"traceutil/trace.go:171","msg":"trace[1934886295] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:396; }","duration":"816.662634ms","start":"2024-10-08T18:53:49.156011Z","end":"2024-10-08T18:53:49.972673Z","steps":["trace[1934886295] 'agreement among raft nodes before linearized reading'  (duration: 816.605567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.155973Z","time spent":"816.723265ms","remote":"127.0.0.1:57624","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-08T18:53:49.972713Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.086563Z","time spent":"886.047395ms","remote":"127.0.0.1:57802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4545,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-078692\" mod_revision:375 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-078692\" value_size:4483 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-078692\" > >"}
	{"level":"warn","ts":"2024-10-08T18:53:49.972858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"869.679922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T18:53:49.972902Z","caller":"traceutil/trace.go:171","msg":"trace[810141164] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:396; }","duration":"869.735876ms","start":"2024-10-08T18:53:49.103159Z","end":"2024-10-08T18:53:49.972895Z","steps":["trace[810141164] 'agreement among raft nodes before linearized reading'  (duration: 869.665166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.103134Z","time spent":"869.784407ms","remote":"127.0.0.1:57972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":28,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"info","ts":"2024-10-08T18:53:56.435104Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-08T18:53:56.435194Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-078692","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.72:2380"],"advertise-client-urls":["https://192.168.61.72:2379"]}
	{"level":"warn","ts":"2024-10-08T18:53:56.435272Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-08T18:53:56.435359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-08T18:53:56.463702Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.72:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-08T18:53:56.464130Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.72:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-08T18:53:56.465362Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d2afe71ba7be449a","current-leader-member-id":"d2afe71ba7be449a"}
	{"level":"info","ts":"2024-10-08T18:53:56.469001Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.72:2380"}
	{"level":"info","ts":"2024-10-08T18:53:56.469190Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.72:2380"}
	{"level":"info","ts":"2024-10-08T18:53:56.469222Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-078692","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.72:2380"],"advertise-client-urls":["https://192.168.61.72:2379"]}
	
	
	==> kernel <==
	 18:54:33 up 2 min,  0 users,  load average: 0.84, 0.34, 0.12
	Linux pause-078692 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2] <==
	W1008 18:54:05.846687       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.859438       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.884337       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.888239       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.888636       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.890018       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.896629       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.954278       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.973914       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.997893       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.034785       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.087916       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.117923       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.126774       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.134405       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.140024       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.146711       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.182939       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.209927       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.211485       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.257117       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.299768       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.302294       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.742444       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.793484       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042] <==
	I1008 18:54:14.955325       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 18:54:14.955901       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 18:54:14.956075       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 18:54:14.956926       1 shared_informer.go:320] Caches are synced for configmaps
	I1008 18:54:14.957785       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1008 18:54:14.959424       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 18:54:14.960341       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1008 18:54:14.960382       1 aggregator.go:171] initial CRD sync complete...
	I1008 18:54:14.960398       1 autoregister_controller.go:144] Starting autoregister controller
	I1008 18:54:14.960402       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 18:54:14.960406       1 cache.go:39] Caches are synced for autoregister controller
	I1008 18:54:14.965084       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1008 18:54:14.995424       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1008 18:54:15.004825       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1008 18:54:15.005137       1 policy_source.go:224] refreshing policies
	E1008 18:54:15.006999       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 18:54:15.037313       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 18:54:15.775356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 18:54:16.734911       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 18:54:16.755445       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 18:54:16.807126       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 18:54:16.843361       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 18:54:16.983669       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 18:54:20.657503       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 18:54:20.659009       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a] <==
	I1008 18:53:52.514778       1 shared_informer.go:320] Caches are synced for TTL
	I1008 18:53:52.515005       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 18:53:52.516304       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-078692"
	I1008 18:53:52.516446       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 18:53:52.518266       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1008 18:53:52.520780       1 shared_informer.go:320] Caches are synced for namespace
	I1008 18:53:52.521021       1 shared_informer.go:320] Caches are synced for daemon sets
	I1008 18:53:52.522421       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1008 18:53:52.522501       1 shared_informer.go:320] Caches are synced for GC
	I1008 18:53:52.527684       1 shared_informer.go:320] Caches are synced for stateful set
	I1008 18:53:52.533882       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1008 18:53:52.536103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.976381ms"
	I1008 18:53:52.538135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="112.416µs"
	I1008 18:53:52.549724       1 shared_informer.go:320] Caches are synced for PV protection
	I1008 18:53:52.564540       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1008 18:53:52.613544       1 shared_informer.go:320] Caches are synced for endpoint
	I1008 18:53:52.671652       1 shared_informer.go:320] Caches are synced for HPA
	I1008 18:53:52.686597       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1008 18:53:52.715133       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1008 18:53:52.725271       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:53:52.728210       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:53:53.170425       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:53:53.201553       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:53:53.201613       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 18:53:56.310165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="190.644µs"
	
	
	==> kube-controller-manager [d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437] <==
	I1008 18:54:18.328312       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1008 18:54:18.327694       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1008 18:54:18.329627       1 shared_informer.go:320] Caches are synced for cronjob
	I1008 18:54:18.329727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1008 18:54:18.329793       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1008 18:54:18.334670       1 shared_informer.go:320] Caches are synced for PVC protection
	I1008 18:54:18.343579       1 shared_informer.go:320] Caches are synced for deployment
	I1008 18:54:18.345754       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1008 18:54:18.350442       1 shared_informer.go:320] Caches are synced for expand
	I1008 18:54:18.373687       1 shared_informer.go:320] Caches are synced for stateful set
	I1008 18:54:18.378236       1 shared_informer.go:320] Caches are synced for ephemeral
	I1008 18:54:18.379304       1 shared_informer.go:320] Caches are synced for crt configmap
	I1008 18:54:18.385146       1 shared_informer.go:320] Caches are synced for persistent volume
	I1008 18:54:18.392381       1 shared_informer.go:320] Caches are synced for attach detach
	I1008 18:54:18.477649       1 shared_informer.go:320] Caches are synced for taint
	I1008 18:54:18.478247       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 18:54:18.478611       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-078692"
	I1008 18:54:18.480307       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 18:54:18.493367       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:54:18.509253       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:54:18.959632       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:54:18.959749       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 18:54:18.963111       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:54:20.822726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="197.625879ms"
	I1008 18:54:20.824484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.022µs"
	
	
	==> kube-proxy [6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 18:53:46.860573       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 18:53:48.259470       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.72"]
	E1008 18:53:48.268209       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 18:53:48.376519       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 18:53:48.376590       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 18:53:48.376623       1 server_linux.go:169] "Using iptables Proxier"
	I1008 18:53:48.379798       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 18:53:48.380484       1 server.go:483] "Version info" version="v1.31.1"
	I1008 18:53:48.380523       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:53:48.382779       1 config.go:199] "Starting service config controller"
	I1008 18:53:48.382832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 18:53:48.382869       1 config.go:105] "Starting endpoint slice config controller"
	I1008 18:53:48.382885       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 18:53:48.383535       1 config.go:328] "Starting node config controller"
	I1008 18:53:48.383573       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 18:53:48.483446       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 18:53:48.483466       1 shared_informer.go:320] Caches are synced for service config
	I1008 18:53:48.483793       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 18:54:16.148279       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 18:54:16.160817       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.72"]
	E1008 18:54:16.160935       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 18:54:16.228015       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 18:54:16.228185       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 18:54:16.228221       1 server_linux.go:169] "Using iptables Proxier"
	I1008 18:54:16.236173       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 18:54:16.236533       1 server.go:483] "Version info" version="v1.31.1"
	I1008 18:54:16.236550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:54:16.240678       1 config.go:199] "Starting service config controller"
	I1008 18:54:16.240724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 18:54:16.240749       1 config.go:105] "Starting endpoint slice config controller"
	I1008 18:54:16.240754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 18:54:16.241636       1 config.go:328] "Starting node config controller"
	I1008 18:54:16.241668       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 18:54:16.341193       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 18:54:16.341290       1 shared_informer.go:320] Caches are synced for service config
	I1008 18:54:16.341729       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6] <==
	W1008 18:54:14.829928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1008 18:54:14.829956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.830001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 18:54:14.830068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.833882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 18:54:14.837119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.834094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 18:54:14.837387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.834206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 18:54:14.837472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 18:54:14.839383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.834287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 18:54:14.839760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.900577       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 18:54:14.900635       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1008 18:54:21.311653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0] <==
	I1008 18:53:46.449931       1 serving.go:386] Generated self-signed cert in-memory
	W1008 18:53:48.172225       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 18:53:48.172322       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 18:53:48.172348       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 18:53:48.172410       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 18:53:48.250931       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1008 18:53:48.250971       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:53:48.255933       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 18:53:48.262545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 18:53:48.262580       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 18:53:48.262599       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 18:53:48.363685       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1008 18:53:56.577441       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 08 18:54:09 pause-078692 kubelet[3154]: W1008 18:54:09.374995    3154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.72:8443: connect: connection refused
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.375131    3154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.72:8443: connect: connection refused" logger="UnhandledError"
	Oct 08 18:54:09 pause-078692 kubelet[3154]: W1008 18:54:09.452183    3154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-078692&limit=500&resourceVersion=0": dial tcp 192.168.61.72:8443: connect: connection refused
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.452261    3154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-078692&limit=500&resourceVersion=0\": dial tcp 192.168.61.72:8443: connect: connection refused" logger="UnhandledError"
	Oct 08 18:54:09 pause-078692 kubelet[3154]: W1008 18:54:09.835746    3154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.61.72:8443: connect: connection refused
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.836012    3154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.72:8443: connect: connection refused" logger="UnhandledError"
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.848820    3154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-078692?timeout=10s\": dial tcp 192.168.61.72:8443: connect: connection refused" interval="1.6s"
	Oct 08 18:54:10 pause-078692 kubelet[3154]: I1008 18:54:10.057346    3154 kubelet_node_status.go:72] "Attempting to register node" node="pause-078692"
	Oct 08 18:54:10 pause-078692 kubelet[3154]: E1008 18:54:10.058813    3154 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.72:8443: connect: connection refused" node="pause-078692"
	Oct 08 18:54:11 pause-078692 kubelet[3154]: I1008 18:54:11.660647    3154 kubelet_node_status.go:72] "Attempting to register node" node="pause-078692"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.114512    3154 kubelet_node_status.go:111] "Node was previously registered" node="pause-078692"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.115139    3154 kubelet_node_status.go:75] "Successfully registered node" node="pause-078692"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.115263    3154 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.116889    3154 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.419854    3154 apiserver.go:52] "Watching apiserver"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.431287    3154 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.479283    3154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/105b25e7-cc00-415b-ac82-81a2bb828d60-lib-modules\") pod \"kube-proxy-q8ntx\" (UID: \"105b25e7-cc00-415b-ac82-81a2bb828d60\") " pod="kube-system/kube-proxy-q8ntx"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.479448    3154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/105b25e7-cc00-415b-ac82-81a2bb828d60-xtables-lock\") pod \"kube-proxy-q8ntx\" (UID: \"105b25e7-cc00-415b-ac82-81a2bb828d60\") " pod="kube-system/kube-proxy-q8ntx"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.724792    3154 scope.go:117] "RemoveContainer" containerID="7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.725664    3154 scope.go:117] "RemoveContainer" containerID="6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb"
	Oct 08 18:54:18 pause-078692 kubelet[3154]: E1008 18:54:18.551633    3154 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413658551213721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:54:18 pause-078692 kubelet[3154]: E1008 18:54:18.551958    3154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413658551213721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:54:20 pause-078692 kubelet[3154]: I1008 18:54:20.265121    3154 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 18:54:28 pause-078692 kubelet[3154]: E1008 18:54:28.555260    3154 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413668554425760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:54:28 pause-078692 kubelet[3154]: E1008 18:54:28.555324    3154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413668554425760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-078692 -n pause-078692
helpers_test.go:261: (dbg) Run:  kubectl --context pause-078692 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-078692 -n pause-078692
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-078692 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-078692 logs -n 25: (1.296092271s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-133603         | test-preload-133603       | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:49 UTC |
	| start   | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:49 UTC | 08 Oct 24 18:50 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:50 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:50 UTC | 08 Oct 24 18:51 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-010854       | scheduled-stop-010854     | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:51 UTC |
	| start   | -p offline-crio-907125         | offline-crio-907125       | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:52 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302431   | kubernetes-upgrade-302431 | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-078692 --memory=2048  | pause-078692              | jenkins | v1.34.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:53 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-204592      | minikube                  | jenkins | v1.26.0 | 08 Oct 24 18:51 UTC | 08 Oct 24 18:53 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-907125         | offline-crio-907125       | jenkins | v1.34.0 | 08 Oct 24 18:52 UTC | 08 Oct 24 18:52 UTC |
	| start   | -p running-upgrade-390529      | minikube                  | jenkins | v1.26.0 | 08 Oct 24 18:52 UTC | 08 Oct 24 18:54 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-078692                | pause-078692              | jenkins | v1.34.0 | 08 Oct 24 18:53 UTC | 08 Oct 24 18:54 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-204592 stop    | minikube                  | jenkins | v1.26.0 | 08 Oct 24 18:53 UTC | 08 Oct 24 18:53 UTC |
	| start   | -p stopped-upgrade-204592      | stopped-upgrade-204592    | jenkins | v1.34.0 | 08 Oct 24 18:53 UTC | 08 Oct 24 18:54 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-390529      | running-upgrade-390529    | jenkins | v1.34.0 | 08 Oct 24 18:54 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-204592      | stopped-upgrade-204592    | jenkins | v1.34.0 | 08 Oct 24 18:54 UTC | 08 Oct 24 18:54 UTC |
	| start   | -p force-systemd-env-193077    | force-systemd-env-193077  | jenkins | v1.34.0 | 08 Oct 24 18:54 UTC |                     |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 18:54:31
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 18:54:31.263755  576068 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:54:31.264011  576068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:54:31.264022  576068 out.go:358] Setting ErrFile to fd 2...
	I1008 18:54:31.264029  576068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:54:31.264246  576068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:54:31.264830  576068 out.go:352] Setting JSON to false
	I1008 18:54:31.265840  576068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9423,"bootTime":1728404248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:54:31.265955  576068 start.go:139] virtualization: kvm guest
	I1008 18:54:31.267936  576068 out.go:177] * [force-systemd-env-193077] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:54:31.269074  576068 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:54:31.269124  576068 notify.go:220] Checking for updates...
	I1008 18:54:31.271028  576068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:54:31.271974  576068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:54:31.272938  576068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:54:31.274000  576068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:54:31.275152  576068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1008 18:54:31.276594  576068 config.go:182] Loaded profile config "kubernetes-upgrade-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 18:54:31.276726  576068 config.go:182] Loaded profile config "pause-078692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:54:31.276811  576068 config.go:182] Loaded profile config "running-upgrade-390529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1008 18:54:31.276891  576068 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:54:31.313998  576068 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 18:54:31.315199  576068 start.go:297] selected driver: kvm2
	I1008 18:54:31.315212  576068 start.go:901] validating driver "kvm2" against <nil>
	I1008 18:54:31.315227  576068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:54:31.315891  576068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:54:31.315954  576068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:54:31.331888  576068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:54:31.331936  576068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:54:31.332247  576068 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 18:54:31.332284  576068 cni.go:84] Creating CNI manager for ""
	I1008 18:54:31.332333  576068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:54:31.332341  576068 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 18:54:31.332415  576068 start.go:340] cluster config:
	{Name:force-systemd-env-193077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-193077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:54:31.332514  576068 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:54:31.334065  576068 out.go:177] * Starting "force-systemd-env-193077" primary control-plane node in "force-systemd-env-193077" cluster
	I1008 18:54:31.335129  576068 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 18:54:31.335160  576068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 18:54:31.335172  576068 cache.go:56] Caching tarball of preloaded images
	I1008 18:54:31.335246  576068 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:54:31.335282  576068 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 18:54:31.335384  576068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/force-systemd-env-193077/config.json ...
	I1008 18:54:31.335406  576068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/force-systemd-env-193077/config.json: {Name:mk6ef5103ac80391c6920ed45224eb9c8fa9a7e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:54:31.335562  576068 start.go:360] acquireMachinesLock for force-systemd-env-193077: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:54:31.335597  576068 start.go:364] duration metric: took 18.56µs to acquireMachinesLock for "force-systemd-env-193077"
	I1008 18:54:31.335620  576068 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-193077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-193077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:54:31.335699  576068 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 18:54:29.909046  573944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 18:54:29.909498  573944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 18:54:29.909516  573944 kubeadm.go:310] 
	I1008 18:54:29.909606  573944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 18:54:29.909690  573944 kubeadm.go:310] 		timed out waiting for the condition
	I1008 18:54:29.909708  573944 kubeadm.go:310] 
	I1008 18:54:29.909788  573944 kubeadm.go:310] 	This error is likely caused by:
	I1008 18:54:29.909861  573944 kubeadm.go:310] 		- The kubelet is not running
	I1008 18:54:29.910093  573944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 18:54:29.910103  573944 kubeadm.go:310] 
	I1008 18:54:29.910392  573944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 18:54:29.910491  573944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 18:54:29.910601  573944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 18:54:29.910634  573944 kubeadm.go:310] 
	I1008 18:54:29.910933  573944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 18:54:29.911623  573944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 18:54:29.911651  573944 kubeadm.go:310] 
	I1008 18:54:29.911884  573944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 18:54:29.912091  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 18:54:29.912295  573944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 18:54:29.912495  573944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 18:54:29.912522  573944 kubeadm.go:310] 
	I1008 18:54:29.912919  573944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 18:54:29.913448  573944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 18:54:29.913573  573944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 18:54:29.913704  573944 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302431 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 18:54:29.913760  573944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 18:54:30.906102  573944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:54:30.923525  573944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:54:30.933788  573944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:54:30.933814  573944 kubeadm.go:157] found existing configuration files:
	
	I1008 18:54:30.933869  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:54:30.943454  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:54:30.943516  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:54:30.954492  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:54:30.963676  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:54:30.963717  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:54:30.973366  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:54:30.983733  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:54:30.983792  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:54:30.994523  573944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:54:31.004856  573944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:54:31.004900  573944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 18:54:31.015494  573944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 18:54:31.085548  573944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 18:54:31.085627  573944 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 18:54:31.230446  573944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 18:54:31.230582  573944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 18:54:31.230723  573944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 18:54:31.442163  573944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 18:54:29.975568  575275 pod_ready.go:93] pod "kube-apiserver-pause-078692" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:29.975600  575275 pod_ready.go:82] duration metric: took 400.072091ms for pod "kube-apiserver-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:29.975614  575275 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.375152  575275 pod_ready.go:93] pod "kube-controller-manager-pause-078692" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:30.375181  575275 pod_ready.go:82] duration metric: took 399.557558ms for pod "kube-controller-manager-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.375196  575275 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q8ntx" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.775303  575275 pod_ready.go:93] pod "kube-proxy-q8ntx" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:30.775329  575275 pod_ready.go:82] duration metric: took 400.125521ms for pod "kube-proxy-q8ntx" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:30.775339  575275 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:31.175511  575275 pod_ready.go:93] pod "kube-scheduler-pause-078692" in "kube-system" namespace has status "Ready":"True"
	I1008 18:54:31.175548  575275 pod_ready.go:82] duration metric: took 400.201452ms for pod "kube-scheduler-pause-078692" in "kube-system" namespace to be "Ready" ...
	I1008 18:54:31.175559  575275 pod_ready.go:39] duration metric: took 1.759750066s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 18:54:31.175577  575275 api_server.go:52] waiting for apiserver process to appear ...
	I1008 18:54:31.175641  575275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:54:31.189536  575275 api_server.go:72] duration metric: took 1.981986704s to wait for apiserver process to appear ...
	I1008 18:54:31.189569  575275 api_server.go:88] waiting for apiserver healthz status ...
	I1008 18:54:31.189615  575275 api_server.go:253] Checking apiserver healthz at https://192.168.61.72:8443/healthz ...
	I1008 18:54:31.194850  575275 api_server.go:279] https://192.168.61.72:8443/healthz returned 200:
	ok
	I1008 18:54:31.196182  575275 api_server.go:141] control plane version: v1.31.1
	I1008 18:54:31.196212  575275 api_server.go:131] duration metric: took 6.633398ms to wait for apiserver health ...
	I1008 18:54:31.196222  575275 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 18:54:31.379345  575275 system_pods.go:59] 6 kube-system pods found
	I1008 18:54:31.379382  575275 system_pods.go:61] "coredns-7c65d6cfc9-bzh6z" [5cf0a7a2-70e9-4f34-97d8-3c51d466b442] Running
	I1008 18:54:31.379389  575275 system_pods.go:61] "etcd-pause-078692" [0ec80229-076c-49de-bd5b-9243672f1d09] Running
	I1008 18:54:31.379395  575275 system_pods.go:61] "kube-apiserver-pause-078692" [b678a14a-1ab4-4c38-ac05-d6f14696e296] Running
	I1008 18:54:31.379401  575275 system_pods.go:61] "kube-controller-manager-pause-078692" [facd9af0-dee4-4e0b-8bdd-99fb64357042] Running
	I1008 18:54:31.379406  575275 system_pods.go:61] "kube-proxy-q8ntx" [105b25e7-cc00-415b-ac82-81a2bb828d60] Running
	I1008 18:54:31.379410  575275 system_pods.go:61] "kube-scheduler-pause-078692" [a400f672-e859-43d8-8eae-a1acb5ae9166] Running
	I1008 18:54:31.379418  575275 system_pods.go:74] duration metric: took 183.188485ms to wait for pod list to return data ...
	I1008 18:54:31.379428  575275 default_sa.go:34] waiting for default service account to be created ...
	I1008 18:54:31.575062  575275 default_sa.go:45] found service account: "default"
	I1008 18:54:31.575096  575275 default_sa.go:55] duration metric: took 195.660169ms for default service account to be created ...
	I1008 18:54:31.575108  575275 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 18:54:31.776609  575275 system_pods.go:86] 6 kube-system pods found
	I1008 18:54:31.776638  575275 system_pods.go:89] "coredns-7c65d6cfc9-bzh6z" [5cf0a7a2-70e9-4f34-97d8-3c51d466b442] Running
	I1008 18:54:31.776647  575275 system_pods.go:89] "etcd-pause-078692" [0ec80229-076c-49de-bd5b-9243672f1d09] Running
	I1008 18:54:31.776657  575275 system_pods.go:89] "kube-apiserver-pause-078692" [b678a14a-1ab4-4c38-ac05-d6f14696e296] Running
	I1008 18:54:31.776661  575275 system_pods.go:89] "kube-controller-manager-pause-078692" [facd9af0-dee4-4e0b-8bdd-99fb64357042] Running
	I1008 18:54:31.776664  575275 system_pods.go:89] "kube-proxy-q8ntx" [105b25e7-cc00-415b-ac82-81a2bb828d60] Running
	I1008 18:54:31.776667  575275 system_pods.go:89] "kube-scheduler-pause-078692" [a400f672-e859-43d8-8eae-a1acb5ae9166] Running
	I1008 18:54:31.776674  575275 system_pods.go:126] duration metric: took 201.560624ms to wait for k8s-apps to be running ...
	I1008 18:54:31.776682  575275 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 18:54:31.776740  575275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:54:31.795037  575275 system_svc.go:56] duration metric: took 18.343413ms WaitForService to wait for kubelet
	I1008 18:54:31.795070  575275 kubeadm.go:582] duration metric: took 2.587526191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:54:31.795093  575275 node_conditions.go:102] verifying NodePressure condition ...
	I1008 18:54:31.975260  575275 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 18:54:31.975290  575275 node_conditions.go:123] node cpu capacity is 2
	I1008 18:54:31.975306  575275 node_conditions.go:105] duration metric: took 180.20631ms to run NodePressure ...
	I1008 18:54:31.975320  575275 start.go:241] waiting for startup goroutines ...
	I1008 18:54:31.975331  575275 start.go:246] waiting for cluster config update ...
	I1008 18:54:31.975341  575275 start.go:255] writing updated cluster config ...
	I1008 18:54:31.975730  575275 ssh_runner.go:195] Run: rm -f paused
	I1008 18:54:32.036000  575275 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 18:54:32.037692  575275 out.go:177] * Done! kubectl is now configured to use "pause-078692" cluster and "default" namespace by default
	I1008 18:54:31.445304  573944 out.go:235]   - Generating certificates and keys ...
	I1008 18:54:31.445410  573944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 18:54:31.445559  573944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 18:54:31.445689  573944 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 18:54:31.445791  573944 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 18:54:31.445890  573944 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 18:54:31.445978  573944 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 18:54:31.446069  573944 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 18:54:31.446157  573944 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 18:54:31.446265  573944 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 18:54:31.446433  573944 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 18:54:31.446488  573944 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 18:54:31.446578  573944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 18:54:31.528192  573944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 18:54:31.667896  573944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 18:54:31.934272  573944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 18:54:32.103890  573944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 18:54:32.121309  573944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 18:54:32.122596  573944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 18:54:32.122710  573944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 18:54:32.296864  573944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 18:54:32.298805  573944 out.go:235]   - Booting up control plane ...
	I1008 18:54:32.298931  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 18:54:32.314714  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 18:54:32.316162  573944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 18:54:32.317163  573944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 18:54:32.320028  573944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 18:54:29.511943  575770 api_server.go:279] https://192.168.72.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 18:54:29.511975  575770 api_server.go:103] status: https://192.168.72.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 18:54:29.511993  575770 api_server.go:253] Checking apiserver healthz at https://192.168.72.186:8443/healthz ...
	I1008 18:54:31.518731  575770 api_server.go:279] https://192.168.72.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 18:54:31.518788  575770 api_server.go:103] status: https://192.168.72.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 18:54:31.518806  575770 api_server.go:253] Checking apiserver healthz at https://192.168.72.186:8443/healthz ...
	I1008 18:54:33.524966  575770 api_server.go:279] https://192.168.72.186:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 18:54:33.525001  575770 api_server.go:103] status: https://192.168.72.186:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 18:54:33.525020  575770 api_server.go:253] Checking apiserver healthz at https://192.168.72.186:8443/healthz ...
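	
	The repeated 500 responses above come from kube-apiserver's /healthz endpoint, which reports each internal check separately; in this log only the etcd check is failing. A minimal sketch of querying it by hand against the address polled above (-k skips certificate verification, ?verbose returns the per-check breakdown, and an individual check such as etcd can be addressed by name):
	
		curl -k "https://192.168.72.186:8443/healthz?verbose"
		curl -k "https://192.168.72.186:8443/healthz/etcd"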
	
	
	==> CRI-O <==
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.722086204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413674722011329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c0654dd-6db9-46e8-b082-34784775293c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.722563123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e4e9427-1df4-42ec-98d7-b2b6544892f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.722630485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e4e9427-1df4-42ec-98d7-b2b6544892f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.722871230Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e4e9427-1df4-42ec-98d7-b2b6544892f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.768157811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cb89b5d-2424-4184-987e-a7606343063b name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.768261749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cb89b5d-2424-4184-987e-a7606343063b name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.769933444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e37657e-1dcb-4495-b990-84bf6c216989 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.770633878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413674770608043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e37657e-1dcb-4495-b990-84bf6c216989 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.771215973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4eed0bbe-141c-4810-a806-0d3cf6c35798 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.771265164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4eed0bbe-141c-4810-a806-0d3cf6c35798 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.771546130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4eed0bbe-141c-4810-a806-0d3cf6c35798 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.815584353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd096d9b-5baa-45b1-a205-cf4dc1393693 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.815675122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd096d9b-5baa-45b1-a205-cf4dc1393693 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.817007563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efe1a9fe-c348-4ebe-bd25-038835d1f4f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.817576220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413674817547902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efe1a9fe-c348-4ebe-bd25-038835d1f4f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.818141346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e4f9b4f-0545-475b-8154-450c8fafa3ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.818191356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e4f9b4f-0545-475b-8154-450c8fafa3ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.818434290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e4f9b4f-0545-475b-8154-450c8fafa3ff name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.860561196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c3669a9-165f-4b97-b490-ab685aea5131 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.860633250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c3669a9-165f-4b97-b490-ab685aea5131 name=/runtime.v1.RuntimeService/Version
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.861474353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e61b6db-7fea-4b4c-8922-b13ad897878b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.861846692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413674861821920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e61b6db-7fea-4b4c-8922-b13ad897878b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.862452574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d99f5a51-f044-4905-ad7d-a20637fdc227 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.862514254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d99f5a51-f044-4905-ad7d-a20637fdc227 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 18:54:34 pause-078692 crio[2080]: time="2024-10-08 18:54:34.862779164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728413655767561450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728413655772090749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728413648907120365,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728413648911211060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b
8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728413648890407800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728413646751854344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074,PodSandboxId:90cc09d6d4f1bdc9c6a7e87fe9c2a62e0cdbc12a44bfadfbdab977ac9098f24b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728413625683525666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bzh6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf0a7a2-70e9-4f34-97d8-3c51d466b442,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb,PodSandboxId:935e6c908c21945c05f40e8192d1a2257382e2ecf27eda3019673ccfb63baf36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728413624853420849,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-q8ntx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105b25e7-cc00-415b-ac82-81a2bb828d60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89,PodSandboxId:c0fb2ce712a298c698b8fc6aa27ea3ff7826351a543fbd7e60ee26ac5f402ee1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728413624656138677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-078692,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 13aeb4dfc011f3f7eef4b5bcf7cc57b8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a,PodSandboxId:c551edaa1d01d47850fbb0a43dd3e38766b99c0401d6b20b3c8b7e09203279d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728413624488778528,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-078692,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 73c67063d8e3919ae9130b2dffd13eb9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0,PodSandboxId:048ed81d48f04ea91be0ff1b8d3cdad24f690b93c36da90608e8ca81adf75fb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728413624528920148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-078692,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: d72dd44e2b453b2aa6a289d8a4041f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2,PodSandboxId:2ff7d04ca1b71b398bf784c213c5a009468c8c56254b72e6b3f7322ca85a6123,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728413624365930018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-078692,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a83ac283562eab84c6e47862d402d8e6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d99f5a51-f044-4905-ad7d-a20637fdc227 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e592741c5c23       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   90cc09d6d4f1b       coredns-7c65d6cfc9-bzh6z
	edadd50d5d743       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   935e6c908c219       kube-proxy-q8ntx
	45556f542fd5a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago      Running             etcd                      2                   c0fb2ce712a29       etcd-pause-078692
	d031aac2b9705       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   26 seconds ago      Running             kube-controller-manager   2                   c551edaa1d01d       kube-controller-manager-pause-078692
	bb7c327ca82c7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   26 seconds ago      Running             kube-apiserver            2                   2ff7d04ca1b71       kube-apiserver-pause-078692
	a259e92e95df9       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   28 seconds ago      Running             kube-scheduler            2                   048ed81d48f04       kube-scheduler-pause-078692
	7575b2d44fb69       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   49 seconds ago      Exited              coredns                   1                   90cc09d6d4f1b       coredns-7c65d6cfc9-bzh6z
	6bea03680d9de       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   50 seconds ago      Exited              kube-proxy                1                   935e6c908c219       kube-proxy-q8ntx
	4f4ad5c332bb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   50 seconds ago      Exited              etcd                      1                   c0fb2ce712a29       etcd-pause-078692
	b0c22f3d101a3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   50 seconds ago      Exited              kube-scheduler            1                   048ed81d48f04       kube-scheduler-pause-078692
	9b7138f6dbdfb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   50 seconds ago      Exited              kube-controller-manager   1                   c551edaa1d01d       kube-controller-manager-pause-078692
	88f40d58d918e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   50 seconds ago      Exited              kube-apiserver            1                   2ff7d04ca1b71       kube-apiserver-pause-078692
	
	
	==> coredns [7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54746 - 25435 "HINFO IN 2923528172057456216.7447597708856327474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012869661s
	
	
	==> coredns [9e592741c5c23b22c7aa41a4c73e097eeb511731476533add552e67709cf3be3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33521 - 61906 "HINFO IN 6723085254346993799.2029043226398599707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034062921s
	
	
	==> describe nodes <==
	Name:               pause-078692
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-078692
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=pause-078692
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T18_53_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:53:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-078692
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:54:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:54:15 +0000   Tue, 08 Oct 2024 18:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.72
	  Hostname:    pause-078692
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e51371a6695400eacd206724c383d43
	  System UUID:                8e51371a-6695-400e-acd2-06724c383d43
	  Boot ID:                    51c1f199-1e84-440a-95fb-00abf7116444
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bzh6z                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-pause-078692                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         91s
	  kube-system                 kube-apiserver-pause-078692             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-pause-078692    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-q8ntx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-pause-078692             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 46s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node pause-078692 status is now: NodeHasSufficientMemory
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node pause-078692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node pause-078692 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node pause-078692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node pause-078692 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     90s                kubelet          Node pause-078692 status is now: NodeHasSufficientPID
	  Normal  NodeReady                89s                kubelet          Node pause-078692 status is now: NodeReady
	  Normal  RegisteredNode           86s                node-controller  Node pause-078692 event: Registered Node pause-078692 in Controller
	  Normal  CIDRAssignmentFailed     86s                cidrAllocator    Node pause-078692 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           43s                node-controller  Node pause-078692 event: Registered Node pause-078692 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-078692 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-078692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-078692 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-078692 event: Registered Node pause-078692 in Controller
	
	
	==> dmesg <==
	[  +9.771985] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.066744] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050613] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.198747] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.116754] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.284651] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.925483] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.662405] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
	[  +0.067683] kauditd_printk_skb: 158 callbacks suppressed
	[Oct 8 18:53] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.081451] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.786606] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +0.887867] kauditd_printk_skb: 49 callbacks suppressed
	[ +10.571972] kauditd_printk_skb: 47 callbacks suppressed
	[ +21.747734] systemd-fstab-generator[2004]: Ignoring "noauto" option for root device
	[  +0.173336] systemd-fstab-generator[2016]: Ignoring "noauto" option for root device
	[  +0.183152] systemd-fstab-generator[2030]: Ignoring "noauto" option for root device
	[  +0.153809] systemd-fstab-generator[2042]: Ignoring "noauto" option for root device
	[  +0.343600] systemd-fstab-generator[2071]: Ignoring "noauto" option for root device
	[  +1.641427] systemd-fstab-generator[2593]: Ignoring "noauto" option for root device
	[  +3.360224] kauditd_printk_skb: 195 callbacks suppressed
	[Oct 8 18:54] systemd-fstab-generator[3147]: Ignoring "noauto" option for root device
	[  +7.684300] kauditd_printk_skb: 39 callbacks suppressed
	[ +13.392475] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[  +0.094455] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [45556f542fd5a5626a35fb36d9610e2e3593eeb9c9c9b057e8038ce50fa23e77] <==
	{"level":"info","ts":"2024-10-08T18:54:13.214814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-08T18:54:13.214938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-08T18:54:13.214982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a received MsgPreVoteResp from d2afe71ba7be449a at term 3"}
	{"level":"info","ts":"2024-10-08T18:54:13.215019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a became candidate at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.215108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a received MsgVoteResp from d2afe71ba7be449a at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.215145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d2afe71ba7be449a became leader at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.215189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d2afe71ba7be449a elected leader d2afe71ba7be449a at term 4"}
	{"level":"info","ts":"2024-10-08T18:54:13.218452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:54:13.219173Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d2afe71ba7be449a","local-member-attributes":"{Name:pause-078692 ClientURLs:[https://192.168.61.72:2379]}","request-path":"/0/members/d2afe71ba7be449a/attributes","cluster-id":"2c86054c8ae24e65","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T18:54:13.219615Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:54:13.219890Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T18:54:13.219921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T18:54:13.220374Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T18:54:13.220436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T18:54:13.221340Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T18:54:13.221908Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.72:2379"}
	{"level":"info","ts":"2024-10-08T18:54:15.275582Z","caller":"traceutil/trace.go:171","msg":"trace[1292991759] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"149.846712ms","start":"2024-10-08T18:54:15.125704Z","end":"2024-10-08T18:54:15.275551Z","steps":["trace[1292991759] 'process raft request'  (duration: 147.495254ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:16.969743Z","caller":"traceutil/trace.go:171","msg":"trace[1643727734] transaction","detail":"{read_only:false; number_of_response:0; response_revision:477; }","duration":"125.82241ms","start":"2024-10-08T18:54:16.843903Z","end":"2024-10-08T18:54:16.969726Z","steps":["trace[1643727734] 'process raft request'  (duration: 83.57925ms)","trace[1643727734] 'compare'  (duration: 41.936844ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-08T18:54:20.619725Z","caller":"traceutil/trace.go:171","msg":"trace[139780329] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"342.423944ms","start":"2024-10-08T18:54:20.277279Z","end":"2024-10-08T18:54:20.619703Z","steps":["trace[139780329] 'process raft request'  (duration: 342.119745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:54:20.620359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:54:20.277258Z","time spent":"342.617494ms","remote":"127.0.0.1:36960","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4954,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-bzh6z\" mod_revision:474 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-bzh6z\" value_size:4895 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-bzh6z\" > >"}
	{"level":"warn","ts":"2024-10-08T18:54:20.813524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.105684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-078692\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-10-08T18:54:20.813620Z","caller":"traceutil/trace.go:171","msg":"trace[216441678] range","detail":"{range_begin:/registry/minions/pause-078692; range_end:; response_count:1; response_revision:523; }","duration":"160.217961ms","start":"2024-10-08T18:54:20.653388Z","end":"2024-10-08T18:54:20.813606Z","steps":["trace[216441678] 'range keys from in-memory index tree'  (duration: 160.033059ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:20.814014Z","caller":"traceutil/trace.go:171","msg":"trace[1939006368] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"154.619563ms","start":"2024-10-08T18:54:20.659385Z","end":"2024-10-08T18:54:20.814005Z","steps":["trace[1939006368] 'process raft request'  (duration: 151.640208ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:20.814784Z","caller":"traceutil/trace.go:171","msg":"trace[2133218642] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"154.008386ms","start":"2024-10-08T18:54:20.660765Z","end":"2024-10-08T18:54:20.814773Z","steps":["trace[2133218642] 'process raft request'  (duration: 153.970229ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T18:54:20.815209Z","caller":"traceutil/trace.go:171","msg":"trace[1822846197] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"155.588006ms","start":"2024-10-08T18:54:20.659610Z","end":"2024-10-08T18:54:20.815198Z","steps":["trace[1822846197] 'process raft request'  (duration: 155.063077ms)"],"step_count":1}
	
	
	==> etcd [4f4ad5c332bb34f9c087268302e2b6b3477aae13e181ef090dfee48331e2ab89] <==
	{"level":"info","ts":"2024-10-08T18:53:49.971823Z","caller":"traceutil/trace.go:171","msg":"trace[1995130158] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:395; }","duration":"893.518644ms","start":"2024-10-08T18:53:49.078293Z","end":"2024-10-08T18:53:49.971812Z","steps":["trace[1995130158] 'agreement among raft nodes before linearized reading'  (duration: 893.219254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.971944Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.078269Z","time spent":"893.666138ms","remote":"127.0.0.1:57780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":444,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"warn","ts":"2024-10-08T18:53:49.972110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"874.926695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2024-10-08T18:53:49.972204Z","caller":"traceutil/trace.go:171","msg":"trace[1465299044] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:396; }","duration":"875.029071ms","start":"2024-10-08T18:53:49.097165Z","end":"2024-10-08T18:53:49.972194Z","steps":["trace[1465299044] 'agreement among raft nodes before linearized reading'  (duration: 874.794355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972270Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.097133Z","time spent":"875.127829ms","remote":"127.0.0.1:57982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":465,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"info","ts":"2024-10-08T18:53:49.972431Z","caller":"traceutil/trace.go:171","msg":"trace[345907204] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"822.082366ms","start":"2024-10-08T18:53:49.150341Z","end":"2024-10-08T18:53:49.972423Z","steps":["trace[345907204] 'process raft request'  (duration: 821.567518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972503Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.150322Z","time spent":"822.149371ms","remote":"127.0.0.1:57700","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":789,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/etcd-pause-078692.17fc8f07061f8dc0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/etcd-pause-078692.17fc8f07061f8dc0\" value_size:708 lease:4943424539903899260 >> failure:<>"}
	{"level":"info","ts":"2024-10-08T18:53:49.972513Z","caller":"traceutil/trace.go:171","msg":"trace[1845851366] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"885.875812ms","start":"2024-10-08T18:53:49.086575Z","end":"2024-10-08T18:53:49.972451Z","steps":["trace[1845851366] 'process raft request'  (duration: 884.668426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"816.620123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T18:53:49.972680Z","caller":"traceutil/trace.go:171","msg":"trace[1934886295] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:396; }","duration":"816.662634ms","start":"2024-10-08T18:53:49.156011Z","end":"2024-10-08T18:53:49.972673Z","steps":["trace[1934886295] 'agreement among raft nodes before linearized reading'  (duration: 816.605567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.155973Z","time spent":"816.723265ms","remote":"127.0.0.1:57624","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-08T18:53:49.972713Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.086563Z","time spent":"886.047395ms","remote":"127.0.0.1:57802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4545,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-078692\" mod_revision:375 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-078692\" value_size:4483 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-078692\" > >"}
	{"level":"warn","ts":"2024-10-08T18:53:49.972858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"869.679922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T18:53:49.972902Z","caller":"traceutil/trace.go:171","msg":"trace[810141164] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:396; }","duration":"869.735876ms","start":"2024-10-08T18:53:49.103159Z","end":"2024-10-08T18:53:49.972895Z","steps":["trace[810141164] 'agreement among raft nodes before linearized reading'  (duration: 869.665166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T18:53:49.972927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T18:53:49.103134Z","time spent":"869.784407ms","remote":"127.0.0.1:57972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":28,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"info","ts":"2024-10-08T18:53:56.435104Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-08T18:53:56.435194Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-078692","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.72:2380"],"advertise-client-urls":["https://192.168.61.72:2379"]}
	{"level":"warn","ts":"2024-10-08T18:53:56.435272Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-08T18:53:56.435359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-08T18:53:56.463702Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.72:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-08T18:53:56.464130Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.72:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-08T18:53:56.465362Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d2afe71ba7be449a","current-leader-member-id":"d2afe71ba7be449a"}
	{"level":"info","ts":"2024-10-08T18:53:56.469001Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.72:2380"}
	{"level":"info","ts":"2024-10-08T18:53:56.469190Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.72:2380"}
	{"level":"info","ts":"2024-10-08T18:53:56.469222Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-078692","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.72:2380"],"advertise-client-urls":["https://192.168.61.72:2379"]}
	
	
	==> kernel <==
	 18:54:35 up 2 min,  0 users,  load average: 0.86, 0.35, 0.13
	Linux pause-078692 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [88f40d58d918ef30eae4d7ffae86bcec6f988e6a54cb1eb2c9013f312ce0a5f2] <==
	W1008 18:54:05.846687       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.859438       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.884337       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.888239       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.888636       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.890018       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.896629       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.954278       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.973914       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:05.997893       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.034785       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.087916       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.117923       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.126774       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.134405       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.140024       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.146711       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.182939       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.209927       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.211485       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.257117       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.299768       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.302294       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.742444       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 18:54:06.793484       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bb7c327ca82c7afc82a2e2bd41c45c619fb102bafa90ea19227c4e946e688042] <==
	I1008 18:54:14.955325       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 18:54:14.955901       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 18:54:14.956075       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 18:54:14.956926       1 shared_informer.go:320] Caches are synced for configmaps
	I1008 18:54:14.957785       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1008 18:54:14.959424       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 18:54:14.960341       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1008 18:54:14.960382       1 aggregator.go:171] initial CRD sync complete...
	I1008 18:54:14.960398       1 autoregister_controller.go:144] Starting autoregister controller
	I1008 18:54:14.960402       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 18:54:14.960406       1 cache.go:39] Caches are synced for autoregister controller
	I1008 18:54:14.965084       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1008 18:54:14.995424       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1008 18:54:15.004825       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1008 18:54:15.005137       1 policy_source.go:224] refreshing policies
	E1008 18:54:15.006999       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 18:54:15.037313       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 18:54:15.775356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 18:54:16.734911       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 18:54:16.755445       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 18:54:16.807126       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 18:54:16.843361       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 18:54:16.983669       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 18:54:20.657503       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 18:54:20.659009       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9b7138f6dbdfb132e5e0e9d9241920013755f0dce1f5a659ced35b2e2040cc7a] <==
	I1008 18:53:52.514778       1 shared_informer.go:320] Caches are synced for TTL
	I1008 18:53:52.515005       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 18:53:52.516304       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-078692"
	I1008 18:53:52.516446       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 18:53:52.518266       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1008 18:53:52.520780       1 shared_informer.go:320] Caches are synced for namespace
	I1008 18:53:52.521021       1 shared_informer.go:320] Caches are synced for daemon sets
	I1008 18:53:52.522421       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1008 18:53:52.522501       1 shared_informer.go:320] Caches are synced for GC
	I1008 18:53:52.527684       1 shared_informer.go:320] Caches are synced for stateful set
	I1008 18:53:52.533882       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1008 18:53:52.536103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.976381ms"
	I1008 18:53:52.538135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="112.416µs"
	I1008 18:53:52.549724       1 shared_informer.go:320] Caches are synced for PV protection
	I1008 18:53:52.564540       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1008 18:53:52.613544       1 shared_informer.go:320] Caches are synced for endpoint
	I1008 18:53:52.671652       1 shared_informer.go:320] Caches are synced for HPA
	I1008 18:53:52.686597       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1008 18:53:52.715133       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1008 18:53:52.725271       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:53:52.728210       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:53:53.170425       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:53:53.201553       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:53:53.201613       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 18:53:56.310165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="190.644µs"
	
	
	==> kube-controller-manager [d031aac2b9705dd3500bcc495c02bf5cbb945f37df659b8ef3583855da682437] <==
	I1008 18:54:18.328312       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1008 18:54:18.327694       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1008 18:54:18.329627       1 shared_informer.go:320] Caches are synced for cronjob
	I1008 18:54:18.329727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1008 18:54:18.329793       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1008 18:54:18.334670       1 shared_informer.go:320] Caches are synced for PVC protection
	I1008 18:54:18.343579       1 shared_informer.go:320] Caches are synced for deployment
	I1008 18:54:18.345754       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1008 18:54:18.350442       1 shared_informer.go:320] Caches are synced for expand
	I1008 18:54:18.373687       1 shared_informer.go:320] Caches are synced for stateful set
	I1008 18:54:18.378236       1 shared_informer.go:320] Caches are synced for ephemeral
	I1008 18:54:18.379304       1 shared_informer.go:320] Caches are synced for crt configmap
	I1008 18:54:18.385146       1 shared_informer.go:320] Caches are synced for persistent volume
	I1008 18:54:18.392381       1 shared_informer.go:320] Caches are synced for attach detach
	I1008 18:54:18.477649       1 shared_informer.go:320] Caches are synced for taint
	I1008 18:54:18.478247       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 18:54:18.478611       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-078692"
	I1008 18:54:18.480307       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 18:54:18.493367       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:54:18.509253       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 18:54:18.959632       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:54:18.959749       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 18:54:18.963111       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 18:54:20.822726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="197.625879ms"
	I1008 18:54:20.824484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.022µs"
	
	
	==> kube-proxy [6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 18:53:46.860573       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 18:53:48.259470       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.72"]
	E1008 18:53:48.268209       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 18:53:48.376519       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 18:53:48.376590       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 18:53:48.376623       1 server_linux.go:169] "Using iptables Proxier"
	I1008 18:53:48.379798       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 18:53:48.380484       1 server.go:483] "Version info" version="v1.31.1"
	I1008 18:53:48.380523       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:53:48.382779       1 config.go:199] "Starting service config controller"
	I1008 18:53:48.382832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 18:53:48.382869       1 config.go:105] "Starting endpoint slice config controller"
	I1008 18:53:48.382885       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 18:53:48.383535       1 config.go:328] "Starting node config controller"
	I1008 18:53:48.383573       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 18:53:48.483446       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 18:53:48.483466       1 shared_informer.go:320] Caches are synced for service config
	I1008 18:53:48.483793       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [edadd50d5d743dc06fb341a51312b6cbbf504ee6e73b53659a11d3dfcedb35b0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 18:54:16.148279       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 18:54:16.160817       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.72"]
	E1008 18:54:16.160935       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 18:54:16.228015       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 18:54:16.228185       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 18:54:16.228221       1 server_linux.go:169] "Using iptables Proxier"
	I1008 18:54:16.236173       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 18:54:16.236533       1 server.go:483] "Version info" version="v1.31.1"
	I1008 18:54:16.236550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:54:16.240678       1 config.go:199] "Starting service config controller"
	I1008 18:54:16.240724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 18:54:16.240749       1 config.go:105] "Starting endpoint slice config controller"
	I1008 18:54:16.240754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 18:54:16.241636       1 config.go:328] "Starting node config controller"
	I1008 18:54:16.241668       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 18:54:16.341193       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 18:54:16.341290       1 shared_informer.go:320] Caches are synced for service config
	I1008 18:54:16.341729       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a259e92e95df9ee920e7d05752f920e482d2c7f8ba16cdc066b3359f3c3f37d6] <==
	W1008 18:54:14.829928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1008 18:54:14.829956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.830001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 18:54:14.830068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.833882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 18:54:14.837119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.834094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 18:54:14.837387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.834206       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 18:54:14.837472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 18:54:14.839383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.839646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1008 18:54:14.839688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.834287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 18:54:14.839760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 18:54:14.900577       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 18:54:14.900635       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1008 18:54:21.311653       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b0c22f3d101a3f9e329c283044b8f724e9fdd8d12cd60a22e1511bbdf66e83f0] <==
	I1008 18:53:46.449931       1 serving.go:386] Generated self-signed cert in-memory
	W1008 18:53:48.172225       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 18:53:48.172322       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 18:53:48.172348       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 18:53:48.172410       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 18:53:48.250931       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1008 18:53:48.250971       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:53:48.255933       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 18:53:48.262545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 18:53:48.262580       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 18:53:48.262599       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 18:53:48.363685       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1008 18:53:56.577441       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 08 18:54:09 pause-078692 kubelet[3154]: W1008 18:54:09.374995    3154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.72:8443: connect: connection refused
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.375131    3154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.72:8443: connect: connection refused" logger="UnhandledError"
	Oct 08 18:54:09 pause-078692 kubelet[3154]: W1008 18:54:09.452183    3154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-078692&limit=500&resourceVersion=0": dial tcp 192.168.61.72:8443: connect: connection refused
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.452261    3154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-078692&limit=500&resourceVersion=0\": dial tcp 192.168.61.72:8443: connect: connection refused" logger="UnhandledError"
	Oct 08 18:54:09 pause-078692 kubelet[3154]: W1008 18:54:09.835746    3154 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.61.72:8443: connect: connection refused
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.836012    3154 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.72:8443: connect: connection refused" logger="UnhandledError"
	Oct 08 18:54:09 pause-078692 kubelet[3154]: E1008 18:54:09.848820    3154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-078692?timeout=10s\": dial tcp 192.168.61.72:8443: connect: connection refused" interval="1.6s"
	Oct 08 18:54:10 pause-078692 kubelet[3154]: I1008 18:54:10.057346    3154 kubelet_node_status.go:72] "Attempting to register node" node="pause-078692"
	Oct 08 18:54:10 pause-078692 kubelet[3154]: E1008 18:54:10.058813    3154 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.72:8443: connect: connection refused" node="pause-078692"
	Oct 08 18:54:11 pause-078692 kubelet[3154]: I1008 18:54:11.660647    3154 kubelet_node_status.go:72] "Attempting to register node" node="pause-078692"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.114512    3154 kubelet_node_status.go:111] "Node was previously registered" node="pause-078692"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.115139    3154 kubelet_node_status.go:75] "Successfully registered node" node="pause-078692"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.115263    3154 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.116889    3154 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.419854    3154 apiserver.go:52] "Watching apiserver"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.431287    3154 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.479283    3154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/105b25e7-cc00-415b-ac82-81a2bb828d60-lib-modules\") pod \"kube-proxy-q8ntx\" (UID: \"105b25e7-cc00-415b-ac82-81a2bb828d60\") " pod="kube-system/kube-proxy-q8ntx"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.479448    3154 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/105b25e7-cc00-415b-ac82-81a2bb828d60-xtables-lock\") pod \"kube-proxy-q8ntx\" (UID: \"105b25e7-cc00-415b-ac82-81a2bb828d60\") " pod="kube-system/kube-proxy-q8ntx"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.724792    3154 scope.go:117] "RemoveContainer" containerID="7575b2d44fb69b98424158aa2124ce832a0a021535aaf008261bd0395b7fb074"
	Oct 08 18:54:15 pause-078692 kubelet[3154]: I1008 18:54:15.725664    3154 scope.go:117] "RemoveContainer" containerID="6bea03680d9de0a73b45f24140a74f0ff8285afb522db46874f4e8cf0a5f9cdb"
	Oct 08 18:54:18 pause-078692 kubelet[3154]: E1008 18:54:18.551633    3154 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413658551213721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:54:18 pause-078692 kubelet[3154]: E1008 18:54:18.551958    3154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413658551213721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:54:20 pause-078692 kubelet[3154]: I1008 18:54:20.265121    3154 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 18:54:28 pause-078692 kubelet[3154]: E1008 18:54:28.555260    3154 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413668554425760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 18:54:28 pause-078692 kubelet[3154]: E1008 18:54:28.555324    3154 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728413668554425760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-078692 -n pause-078692
helpers_test.go:261: (dbg) Run:  kubectl --context pause-078692 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (76.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (275.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m35.159913012s)

                                                
                                                
-- stdout --
	* [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:57:56.016191  581665 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:57:56.016422  581665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:57:56.016432  581665 out.go:358] Setting ErrFile to fd 2...
	I1008 18:57:56.016436  581665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:57:56.016618  581665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:57:56.017185  581665 out.go:352] Setting JSON to false
	I1008 18:57:56.018180  581665 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9628,"bootTime":1728404248,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 18:57:56.018286  581665 start.go:139] virtualization: kvm guest
	I1008 18:57:56.020418  581665 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 18:57:56.021613  581665 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 18:57:56.021636  581665 notify.go:220] Checking for updates...
	I1008 18:57:56.023705  581665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 18:57:56.024800  581665 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 18:57:56.025874  581665 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:57:56.027210  581665 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 18:57:56.028393  581665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 18:57:56.029870  581665 config.go:182] Loaded profile config "NoKubernetes-038693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1008 18:57:56.029969  581665 config.go:182] Loaded profile config "cert-expiration-439352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:57:56.030070  581665 config.go:182] Loaded profile config "kubernetes-upgrade-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:57:56.030175  581665 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 18:57:56.065916  581665 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 18:57:56.067028  581665 start.go:297] selected driver: kvm2
	I1008 18:57:56.067042  581665 start.go:901] validating driver "kvm2" against <nil>
	I1008 18:57:56.067060  581665 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 18:57:56.067707  581665 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:57:56.067783  581665 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 18:57:56.083889  581665 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 18:57:56.083934  581665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 18:57:56.084169  581665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 18:57:56.084199  581665 cni.go:84] Creating CNI manager for ""
	I1008 18:57:56.084239  581665 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:57:56.084247  581665 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 18:57:56.084286  581665 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:57:56.084395  581665 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 18:57:56.086656  581665 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 18:57:56.087701  581665 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 18:57:56.087733  581665 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 18:57:56.087742  581665 cache.go:56] Caching tarball of preloaded images
	I1008 18:57:56.087809  581665 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 18:57:56.087819  581665 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 18:57:56.087908  581665 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 18:57:56.087925  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json: {Name:mk07e2daaef6f3fc3f30f760378cee1852696925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:57:56.088051  581665 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 18:58:05.209590  581665 start.go:364] duration metric: took 9.121499376s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 18:58:05.209661  581665 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 18:58:05.209782  581665 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 18:58:05.211548  581665 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 18:58:05.211718  581665 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 18:58:05.211782  581665 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:58:05.229144  581665 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I1008 18:58:05.229684  581665 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:58:05.230351  581665 main.go:141] libmachine: Using API Version  1
	I1008 18:58:05.230378  581665 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:58:05.230822  581665 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:58:05.231079  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 18:58:05.231286  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:05.231481  581665 start.go:159] libmachine.API.Create for "old-k8s-version-256554" (driver="kvm2")
	I1008 18:58:05.231528  581665 client.go:168] LocalClient.Create starting
	I1008 18:58:05.231565  581665 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 18:58:05.231613  581665 main.go:141] libmachine: Decoding PEM data...
	I1008 18:58:05.231637  581665 main.go:141] libmachine: Parsing certificate...
	I1008 18:58:05.231715  581665 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 18:58:05.231748  581665 main.go:141] libmachine: Decoding PEM data...
	I1008 18:58:05.231769  581665 main.go:141] libmachine: Parsing certificate...
	I1008 18:58:05.231797  581665 main.go:141] libmachine: Running pre-create checks...
	I1008 18:58:05.231816  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .PreCreateCheck
	I1008 18:58:05.232280  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 18:58:05.232758  581665 main.go:141] libmachine: Creating machine...
	I1008 18:58:05.232778  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .Create
	I1008 18:58:05.232934  581665 main.go:141] libmachine: (old-k8s-version-256554) Creating KVM machine...
	I1008 18:58:05.234112  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found existing default KVM network
	I1008 18:58:05.235715  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:05.235540  581847 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f5e0}
	I1008 18:58:05.235751  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | created network xml: 
	I1008 18:58:05.235764  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | <network>
	I1008 18:58:05.235780  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |   <name>mk-old-k8s-version-256554</name>
	I1008 18:58:05.235793  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |   <dns enable='no'/>
	I1008 18:58:05.235800  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |   
	I1008 18:58:05.235812  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 18:58:05.235822  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |     <dhcp>
	I1008 18:58:05.235832  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 18:58:05.235841  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |     </dhcp>
	I1008 18:58:05.235850  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |   </ip>
	I1008 18:58:05.235862  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG |   
	I1008 18:58:05.235949  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | </network>
	I1008 18:58:05.235978  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | 
	I1008 18:58:05.240956  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | trying to create private KVM network mk-old-k8s-version-256554 192.168.39.0/24...
	I1008 18:58:05.314080  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | private KVM network mk-old-k8s-version-256554 192.168.39.0/24 created
	I1008 18:58:05.314117  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554 ...
	I1008 18:58:05.314136  581665 main.go:141] libmachine: (old-k8s-version-256554) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 18:58:05.314148  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:05.314034  581847 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:58:05.314348  581665 main.go:141] libmachine: (old-k8s-version-256554) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 18:58:05.586899  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:05.586756  581847 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa...
	I1008 18:58:05.670163  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:05.670031  581847 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/old-k8s-version-256554.rawdisk...
	I1008 18:58:05.670201  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Writing magic tar header
	I1008 18:58:05.670220  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Writing SSH key tar header
	I1008 18:58:05.670233  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:05.670177  581847 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554 ...
	I1008 18:58:05.670731  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554
	I1008 18:58:05.670776  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554 (perms=drwx------)
	I1008 18:58:05.670792  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 18:58:05.670804  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 18:58:05.670817  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 18:58:05.670831  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 18:58:05.670847  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 18:58:05.670857  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 18:58:05.670883  581665 main.go:141] libmachine: (old-k8s-version-256554) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 18:58:05.670895  581665 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 18:58:05.670905  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 18:58:05.670917  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 18:58:05.670930  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home/jenkins
	I1008 18:58:05.670940  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Checking permissions on dir: /home
	I1008 18:58:05.670951  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Skipping /home - not owner
	I1008 18:58:05.672348  581665 main.go:141] libmachine: (old-k8s-version-256554) define libvirt domain using xml: 
	I1008 18:58:05.672382  581665 main.go:141] libmachine: (old-k8s-version-256554) <domain type='kvm'>
	I1008 18:58:05.672393  581665 main.go:141] libmachine: (old-k8s-version-256554)   <name>old-k8s-version-256554</name>
	I1008 18:58:05.672400  581665 main.go:141] libmachine: (old-k8s-version-256554)   <memory unit='MiB'>2200</memory>
	I1008 18:58:05.672410  581665 main.go:141] libmachine: (old-k8s-version-256554)   <vcpu>2</vcpu>
	I1008 18:58:05.672421  581665 main.go:141] libmachine: (old-k8s-version-256554)   <features>
	I1008 18:58:05.672430  581665 main.go:141] libmachine: (old-k8s-version-256554)     <acpi/>
	I1008 18:58:05.672440  581665 main.go:141] libmachine: (old-k8s-version-256554)     <apic/>
	I1008 18:58:05.672450  581665 main.go:141] libmachine: (old-k8s-version-256554)     <pae/>
	I1008 18:58:05.672465  581665 main.go:141] libmachine: (old-k8s-version-256554)     
	I1008 18:58:05.672477  581665 main.go:141] libmachine: (old-k8s-version-256554)   </features>
	I1008 18:58:05.672489  581665 main.go:141] libmachine: (old-k8s-version-256554)   <cpu mode='host-passthrough'>
	I1008 18:58:05.672500  581665 main.go:141] libmachine: (old-k8s-version-256554)   
	I1008 18:58:05.672510  581665 main.go:141] libmachine: (old-k8s-version-256554)   </cpu>
	I1008 18:58:05.672519  581665 main.go:141] libmachine: (old-k8s-version-256554)   <os>
	I1008 18:58:05.672530  581665 main.go:141] libmachine: (old-k8s-version-256554)     <type>hvm</type>
	I1008 18:58:05.672561  581665 main.go:141] libmachine: (old-k8s-version-256554)     <boot dev='cdrom'/>
	I1008 18:58:05.672589  581665 main.go:141] libmachine: (old-k8s-version-256554)     <boot dev='hd'/>
	I1008 18:58:05.672602  581665 main.go:141] libmachine: (old-k8s-version-256554)     <bootmenu enable='no'/>
	I1008 18:58:05.672627  581665 main.go:141] libmachine: (old-k8s-version-256554)   </os>
	I1008 18:58:05.672638  581665 main.go:141] libmachine: (old-k8s-version-256554)   <devices>
	I1008 18:58:05.672658  581665 main.go:141] libmachine: (old-k8s-version-256554)     <disk type='file' device='cdrom'>
	I1008 18:58:05.672685  581665 main.go:141] libmachine: (old-k8s-version-256554)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/boot2docker.iso'/>
	I1008 18:58:05.672701  581665 main.go:141] libmachine: (old-k8s-version-256554)       <target dev='hdc' bus='scsi'/>
	I1008 18:58:05.672716  581665 main.go:141] libmachine: (old-k8s-version-256554)       <readonly/>
	I1008 18:58:05.672728  581665 main.go:141] libmachine: (old-k8s-version-256554)     </disk>
	I1008 18:58:05.672739  581665 main.go:141] libmachine: (old-k8s-version-256554)     <disk type='file' device='disk'>
	I1008 18:58:05.672760  581665 main.go:141] libmachine: (old-k8s-version-256554)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 18:58:05.672783  581665 main.go:141] libmachine: (old-k8s-version-256554)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/old-k8s-version-256554.rawdisk'/>
	I1008 18:58:05.672796  581665 main.go:141] libmachine: (old-k8s-version-256554)       <target dev='hda' bus='virtio'/>
	I1008 18:58:05.672811  581665 main.go:141] libmachine: (old-k8s-version-256554)     </disk>
	I1008 18:58:05.672824  581665 main.go:141] libmachine: (old-k8s-version-256554)     <interface type='network'>
	I1008 18:58:05.672836  581665 main.go:141] libmachine: (old-k8s-version-256554)       <source network='mk-old-k8s-version-256554'/>
	I1008 18:58:05.672854  581665 main.go:141] libmachine: (old-k8s-version-256554)       <model type='virtio'/>
	I1008 18:58:05.672864  581665 main.go:141] libmachine: (old-k8s-version-256554)     </interface>
	I1008 18:58:05.672876  581665 main.go:141] libmachine: (old-k8s-version-256554)     <interface type='network'>
	I1008 18:58:05.672891  581665 main.go:141] libmachine: (old-k8s-version-256554)       <source network='default'/>
	I1008 18:58:05.672904  581665 main.go:141] libmachine: (old-k8s-version-256554)       <model type='virtio'/>
	I1008 18:58:05.672921  581665 main.go:141] libmachine: (old-k8s-version-256554)     </interface>
	I1008 18:58:05.672934  581665 main.go:141] libmachine: (old-k8s-version-256554)     <serial type='pty'>
	I1008 18:58:05.672944  581665 main.go:141] libmachine: (old-k8s-version-256554)       <target port='0'/>
	I1008 18:58:05.672954  581665 main.go:141] libmachine: (old-k8s-version-256554)     </serial>
	I1008 18:58:05.672964  581665 main.go:141] libmachine: (old-k8s-version-256554)     <console type='pty'>
	I1008 18:58:05.672977  581665 main.go:141] libmachine: (old-k8s-version-256554)       <target type='serial' port='0'/>
	I1008 18:58:05.672987  581665 main.go:141] libmachine: (old-k8s-version-256554)     </console>
	I1008 18:58:05.673000  581665 main.go:141] libmachine: (old-k8s-version-256554)     <rng model='virtio'>
	I1008 18:58:05.673039  581665 main.go:141] libmachine: (old-k8s-version-256554)       <backend model='random'>/dev/random</backend>
	I1008 18:58:05.673059  581665 main.go:141] libmachine: (old-k8s-version-256554)     </rng>
	I1008 18:58:05.673070  581665 main.go:141] libmachine: (old-k8s-version-256554)     
	I1008 18:58:05.673086  581665 main.go:141] libmachine: (old-k8s-version-256554)     
	I1008 18:58:05.673096  581665 main.go:141] libmachine: (old-k8s-version-256554)   </devices>
	I1008 18:58:05.673107  581665 main.go:141] libmachine: (old-k8s-version-256554) </domain>
	I1008 18:58:05.673117  581665 main.go:141] libmachine: (old-k8s-version-256554) 
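The domain definition logged above is what libmachine hands to libvirt for this profile. If a later run stalls before "Found IP for machine", the stored definition and the two networks it references can be inspected by hand on the Jenkins host. A minimal sketch, assuming virsh is installed and using the qemu:///system URI that appears in the profile config further down:

    # Confirm the test VM was actually defined and started
    sudo virsh -c qemu:///system list --all

    # Dump the XML libvirt stored for the domain; it should match the definition logged above
    sudo virsh -c qemu:///system dumpxml old-k8s-version-256554

    # Both <interface> elements reference a network; check that 'default' and the
    # profile network 'mk-old-k8s-version-256554' exist, are active, and hand out leases
    sudo virsh -c qemu:///system net-list --all
    sudo virsh -c qemu:///system net-dhcp-leases mk-old-k8s-version-256554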
	I1008 18:58:05.677254  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:47:9e:43 in network default
	I1008 18:58:05.677826  581665 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 18:58:05.677901  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:05.678685  581665 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 18:58:05.678998  581665 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 18:58:05.679472  581665 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 18:58:05.680536  581665 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 18:58:06.963402  581665 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 18:58:06.964346  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:06.964824  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:06.964875  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:06.964819  581847 retry.go:31] will retry after 256.421575ms: waiting for machine to come up
	I1008 18:58:07.223487  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:07.224089  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:07.224125  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:07.224041  581847 retry.go:31] will retry after 371.51213ms: waiting for machine to come up
	I1008 18:58:07.597874  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:07.598461  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:07.598490  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:07.598419  581847 retry.go:31] will retry after 465.599367ms: waiting for machine to come up
	I1008 18:58:08.066206  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:08.066882  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:08.066914  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:08.066819  581847 retry.go:31] will retry after 499.752061ms: waiting for machine to come up
	I1008 18:58:08.568620  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:08.569079  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:08.569111  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:08.569031  581847 retry.go:31] will retry after 640.966163ms: waiting for machine to come up
	I1008 18:58:09.211852  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:09.212282  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:09.212311  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:09.212227  581847 retry.go:31] will retry after 802.324571ms: waiting for machine to come up
	I1008 18:58:10.016076  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:10.016553  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:10.016608  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:10.016492  581847 retry.go:31] will retry after 1.083825247s: waiting for machine to come up
	I1008 18:58:11.102202  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:11.102641  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:11.102667  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:11.102569  581847 retry.go:31] will retry after 997.163076ms: waiting for machine to come up
	I1008 18:58:12.101782  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:12.102274  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:12.102298  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:12.102230  581847 retry.go:31] will retry after 1.862197763s: waiting for machine to come up
	I1008 18:58:13.965928  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:13.966404  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:13.966433  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:13.966355  581847 retry.go:31] will retry after 1.631274937s: waiting for machine to come up
	I1008 18:58:15.599535  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:15.600035  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:15.600065  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:15.599981  581847 retry.go:31] will retry after 1.798067458s: waiting for machine to come up
	I1008 18:58:17.399207  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:17.399647  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:17.399674  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:17.399618  581847 retry.go:31] will retry after 3.354881804s: waiting for machine to come up
	I1008 18:58:20.755679  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:20.756124  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 18:58:20.756149  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 18:58:20.756052  581847 retry.go:31] will retry after 4.393056134s: waiting for machine to come up
	I1008 18:58:25.154583  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.155056  581665 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 18:58:25.155079  581665 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 18:58:25.155089  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.155505  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554
	I1008 18:58:25.229058  581665 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 18:58:25.229103  581665 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 18:58:25.229113  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 18:58:25.231869  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.233025  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.233470  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.233705  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 18:58:25.233730  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 18:58:25.233762  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 18:58:25.233775  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 18:58:25.233807  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 18:58:25.358209  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 18:58:25.358509  581665 main.go:141] libmachine: (old-k8s-version-256554) KVM machine creation complete!
	I1008 18:58:25.358902  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 18:58:25.359495  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:25.359721  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:25.359858  581665 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 18:58:25.359898  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 18:58:25.361159  581665 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 18:58:25.361175  581665 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 18:58:25.361183  581665 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 18:58:25.361191  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:25.363486  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.363888  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.363917  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.364033  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:25.364241  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.364418  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.364583  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:25.364749  581665 main.go:141] libmachine: Using SSH client type: native
	I1008 18:58:25.364975  581665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 18:58:25.364988  581665 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 18:58:25.477424  581665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:58:25.477450  581665 main.go:141] libmachine: Detecting the provisioner...
	I1008 18:58:25.477461  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:25.480239  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.480569  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.480614  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.480759  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:25.480953  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.481086  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.481218  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:25.481406  581665 main.go:141] libmachine: Using SSH client type: native
	I1008 18:58:25.481610  581665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 18:58:25.481622  581665 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 18:58:25.591141  581665 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 18:58:25.591242  581665 main.go:141] libmachine: found compatible host: buildroot
	I1008 18:58:25.591256  581665 main.go:141] libmachine: Provisioning with buildroot...
	I1008 18:58:25.591270  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 18:58:25.591534  581665 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 18:58:25.591562  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 18:58:25.591763  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:25.594651  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.595038  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.595075  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.595239  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:25.595447  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.595608  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.595745  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:25.595889  581665 main.go:141] libmachine: Using SSH client type: native
	I1008 18:58:25.596094  581665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 18:58:25.596109  581665 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 18:58:25.720661  581665 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 18:58:25.720698  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:25.723584  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.723925  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.723952  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.724092  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:25.724343  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.724521  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.724649  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:25.724807  581665 main.go:141] libmachine: Using SSH client type: native
	I1008 18:58:25.725043  581665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 18:58:25.725072  581665 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 18:58:25.842529  581665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 18:58:25.842561  581665 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 18:58:25.842583  581665 buildroot.go:174] setting up certificates
	I1008 18:58:25.842594  581665 provision.go:84] configureAuth start
	I1008 18:58:25.842605  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 18:58:25.842919  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 18:58:25.845528  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.845873  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.845899  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.846061  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:25.848083  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.848398  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.848440  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.848566  581665 provision.go:143] copyHostCerts
	I1008 18:58:25.848632  581665 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 18:58:25.848645  581665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 18:58:25.848710  581665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 18:58:25.848850  581665 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 18:58:25.848862  581665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 18:58:25.848892  581665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 18:58:25.848984  581665 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 18:58:25.848994  581665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 18:58:25.849021  581665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 18:58:25.849118  581665 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
	I1008 18:58:25.986114  581665 provision.go:177] copyRemoteCerts
	I1008 18:58:25.986186  581665 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 18:58:25.986211  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:25.989133  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.989447  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:25.989478  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:25.989635  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:25.989823  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:25.989949  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:25.990074  581665 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 18:58:26.077062  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 18:58:26.101031  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 18:58:26.123473  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 18:58:26.145401  581665 provision.go:87] duration metric: took 302.79475ms to configureAuth
	I1008 18:58:26.145438  581665 buildroot.go:189] setting minikube options for container-runtime
	I1008 18:58:26.145660  581665 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 18:58:26.145752  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:26.148249  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.148554  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.148583  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.148774  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:26.148973  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.149138  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.149291  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:26.149449  581665 main.go:141] libmachine: Using SSH client type: native
	I1008 18:58:26.149731  581665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 18:58:26.149760  581665 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 18:58:26.372594  581665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 18:58:26.372657  581665 main.go:141] libmachine: Checking connection to Docker...
	I1008 18:58:26.372672  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetURL
	I1008 18:58:26.374029  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using libvirt version 6000000
	I1008 18:58:26.376174  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.376471  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.376512  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.376666  581665 main.go:141] libmachine: Docker is up and running!
	I1008 18:58:26.376693  581665 main.go:141] libmachine: Reticulating splines...
	I1008 18:58:26.376701  581665 client.go:171] duration metric: took 21.145160885s to LocalClient.Create
	I1008 18:58:26.376730  581665 start.go:167] duration metric: took 21.145250869s to libmachine.API.Create "old-k8s-version-256554"
	I1008 18:58:26.376744  581665 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 18:58:26.376769  581665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 18:58:26.376795  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:26.377055  581665 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 18:58:26.377080  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:26.379056  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.379353  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.379376  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.379483  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:26.379680  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.379831  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:26.379964  581665 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 18:58:26.464404  581665 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 18:58:26.468506  581665 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 18:58:26.468536  581665 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 18:58:26.468610  581665 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 18:58:26.468684  581665 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 18:58:26.468773  581665 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 18:58:26.477606  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:58:26.500234  581665 start.go:296] duration metric: took 123.464455ms for postStartSetup
	I1008 18:58:26.500302  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 18:58:26.500909  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 18:58:26.503573  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.503951  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.503988  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.504272  581665 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 18:58:26.504454  581665 start.go:128] duration metric: took 21.294659297s to createHost
	I1008 18:58:26.504477  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:26.506660  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.506932  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.506957  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.507101  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:26.507285  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.507468  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.507591  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:26.507723  581665 main.go:141] libmachine: Using SSH client type: native
	I1008 18:58:26.507898  581665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 18:58:26.507909  581665 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 18:58:26.618867  581665 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728413906.576657857
	
	I1008 18:58:26.618897  581665 fix.go:216] guest clock: 1728413906.576657857
	I1008 18:58:26.618907  581665 fix.go:229] Guest: 2024-10-08 18:58:26.576657857 +0000 UTC Remote: 2024-10-08 18:58:26.504464676 +0000 UTC m=+30.530220106 (delta=72.193181ms)
	I1008 18:58:26.618931  581665 fix.go:200] guest clock delta is within tolerance: 72.193181ms
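The guest-clock probe above runs `date +%s.%N` over SSH and compares it with the host clock; the delta here (about 72ms) is well inside tolerance. The same comparison can be repeated manually when skew is suspected. A minimal sketch using the key path and address from this log (awk is only used for the subtraction):

    HOST_TS=$(date +%s.%N)
    GUEST_TS=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa \
        docker@192.168.39.90 'date +%s.%N')
    echo "$HOST_TS $GUEST_TS" | awk '{printf "guest clock delta: %+.3f s\n", $1 - $2}'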
	I1008 18:58:26.618938  581665 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 21.40931623s
	I1008 18:58:26.618973  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:26.619318  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 18:58:26.621933  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.622334  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.622358  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.622540  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:26.623041  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:26.623214  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 18:58:26.623323  581665 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 18:58:26.623368  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:26.623430  581665 ssh_runner.go:195] Run: cat /version.json
	I1008 18:58:26.623457  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 18:58:26.626159  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.626516  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.626582  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.626599  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.626722  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:26.626919  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.626961  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:26.626985  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:26.627053  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:26.627159  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 18:58:26.627214  581665 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 18:58:26.627316  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 18:58:26.627454  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 18:58:26.627617  581665 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 18:58:26.715407  581665 ssh_runner.go:195] Run: systemctl --version
	I1008 18:58:26.746291  581665 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 18:58:26.904169  581665 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 18:58:26.911489  581665 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 18:58:26.911556  581665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 18:58:26.927561  581665 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 18:58:26.927593  581665 start.go:495] detecting cgroup driver to use...
	I1008 18:58:26.927681  581665 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 18:58:26.949570  581665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 18:58:26.964564  581665 docker.go:217] disabling cri-docker service (if available) ...
	I1008 18:58:26.964629  581665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 18:58:26.980071  581665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 18:58:26.994387  581665 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 18:58:27.112803  581665 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 18:58:27.265157  581665 docker.go:233] disabling docker service ...
	I1008 18:58:27.265246  581665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 18:58:27.279918  581665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 18:58:27.292904  581665 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 18:58:27.425119  581665 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 18:58:27.529822  581665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 18:58:27.544028  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 18:58:27.561579  581665 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 18:58:27.561657  581665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:58:27.573603  581665 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 18:58:27.573657  581665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:58:27.584025  581665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:58:27.594589  581665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 18:58:27.604741  581665 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 18:58:27.615481  581665 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 18:58:27.624899  581665 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 18:58:27.624951  581665 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 18:58:27.638055  581665 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 18:58:27.648245  581665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:58:27.765845  581665 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 18:58:27.875175  581665 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 18:58:27.875253  581665 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 18:58:27.881546  581665 start.go:563] Will wait 60s for crictl version
	I1008 18:58:27.881619  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:27.886334  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 18:58:27.927916  581665 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 18:58:27.928012  581665 ssh_runner.go:195] Run: crio --version
	I1008 18:58:27.956796  581665 ssh_runner.go:195] Run: crio --version
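The block above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroupfs cgroup manager, conmon cgroup), restarts the daemon, and then waits up to 60s for its socket and for crictl to answer. When that wait times out in other runs, the effective settings can be checked directly on the guest. A minimal sketch, assuming a shell on the node (for example via `minikube ssh -p old-k8s-version-256554`, an assumed invocation not shown in this log):

    # Show the values the sed edits above were supposed to leave behind
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf

    # Verify the daemon survived the restart and answers on the socket minikube waits for
    sudo systemctl is-active crio
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version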
	I1008 18:58:27.988572  581665 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 18:58:27.993971  581665 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 18:58:27.997348  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:27.997753  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 19:58:19 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 18:58:27.997779  581665 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 18:58:27.998038  581665 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 18:58:28.005721  581665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:58:28.021734  581665 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 18:58:28.021844  581665 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 18:58:28.021885  581665 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:58:28.056874  581665 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 18:58:28.056951  581665 ssh_runner.go:195] Run: which lz4
	I1008 18:58:28.061043  581665 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 18:58:28.065115  581665 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 18:58:28.065149  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 18:58:29.705944  581665 crio.go:462] duration metric: took 1.644926409s to copy over tarball
	I1008 18:58:29.706020  581665 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 18:58:32.295409  581665 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.589350308s)
	I1008 18:58:32.295448  581665 crio.go:469] duration metric: took 2.589472755s to extract the tarball
	I1008 18:58:32.295456  581665 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 18:58:32.341517  581665 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 18:58:32.392136  581665 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
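Both `crictl images --output json` calls above come back without registry.k8s.io/kube-apiserver:v1.20.0, so minikube concludes the preload tarball did not provide the images and falls back to loading each cached image individually (the LoadCachedImages block that follows). The same verification can be reproduced on the guest; a minimal sketch that avoids jq, since it may not be present on the buildroot image:

    # Re-run the check minikube uses to decide whether the preload is usable
    if sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.20.0'; then
        echo "preloaded images present"
    else
        echo "kube-apiserver v1.20.0 missing -> images will be loaded from the cache"
    fi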
	I1008 18:58:32.392165  581665 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 18:58:32.392263  581665 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:58:32.392362  581665 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:32.392267  581665 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 18:58:32.392298  581665 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 18:58:32.392297  581665 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.392316  581665 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:32.392320  581665 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.392339  581665 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:32.393714  581665 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:32.393828  581665 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:32.393950  581665 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:32.393972  581665 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 18:58:32.393958  581665 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:58:32.393984  581665 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 18:58:32.394053  581665 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.394296  581665 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.560918  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.565915  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.579313  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:32.590474  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:32.595495  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:32.597232  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 18:58:32.623858  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 18:58:32.637366  581665 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 18:58:32.637434  581665 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.637487  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.716216  581665 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 18:58:32.716269  581665 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.716321  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.746528  581665 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 18:58:32.746552  581665 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 18:58:32.746579  581665 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:32.746588  581665 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:32.746632  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.746632  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.746633  581665 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 18:58:32.746789  581665 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:32.746829  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.750877  581665 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 18:58:32.750902  581665 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 18:58:32.750930  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.764638  581665 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 18:58:32.764677  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.764680  581665 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 18:58:32.764764  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.764767  581665 ssh_runner.go:195] Run: which crictl
	I1008 18:58:32.767726  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:32.767763  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:32.767772  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:32.767823  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 18:58:32.827509  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.833443  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.833485  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 18:58:32.955001  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 18:58:32.955064  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:32.955114  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:32.955152  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:32.955190  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 18:58:32.960251  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 18:58:32.960327  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 18:58:33.094094  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 18:58:33.110037  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 18:58:33.119281  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 18:58:33.119390  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 18:58:33.119411  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 18:58:33.119464  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 18:58:33.119516  581665 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 18:58:33.198821  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 18:58:33.206351  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 18:58:33.235216  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 18:58:33.235297  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 18:58:33.235318  581665 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 18:58:33.296260  581665 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 18:58:33.438906  581665 cache_images.go:92] duration metric: took 1.04672039s to LoadCachedImages
	W1008 18:58:33.439017  581665 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1008 18:58:33.439044  581665 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 18:58:33.439172  581665 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 18:58:33.439268  581665 ssh_runner.go:195] Run: crio config
	I1008 18:58:33.487124  581665 cni.go:84] Creating CNI manager for ""
	I1008 18:58:33.487148  581665 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 18:58:33.487167  581665 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 18:58:33.487189  581665 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 18:58:33.487353  581665 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 18:58:33.487436  581665 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 18:58:33.497886  581665 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 18:58:33.497968  581665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 18:58:33.511755  581665 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 18:58:33.532397  581665 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 18:58:33.549272  581665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1008 18:58:33.566085  581665 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 18:58:33.570201  581665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 18:58:33.584986  581665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 18:58:33.712215  581665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 18:58:33.729427  581665 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 18:58:33.729450  581665 certs.go:194] generating shared ca certs ...
	I1008 18:58:33.729472  581665 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:33.729655  581665 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 18:58:33.729710  581665 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 18:58:33.729722  581665 certs.go:256] generating profile certs ...
	I1008 18:58:33.729801  581665 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 18:58:33.729829  581665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.crt with IP's: []
	I1008 18:58:33.975512  581665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.crt ...
	I1008 18:58:33.975549  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.crt: {Name:mk2a71d029720cd910a3e014ede960d8a70ad3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:33.975741  581665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key ...
	I1008 18:58:33.975759  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key: {Name:mk8eed32fdd94859bcd5eef47f4673ae2a9c0d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:33.975849  581665 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 18:58:33.975865  581665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt.cd4ca3ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.90]
	I1008 18:58:34.300418  581665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt.cd4ca3ea ...
	I1008 18:58:34.300465  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt.cd4ca3ea: {Name:mke1420187dd6d06f362593b2babe3f503ef2f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:34.300663  581665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea ...
	I1008 18:58:34.300684  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea: {Name:mke05ed674b3cc9ee860d5f6644ee12599150df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:34.300793  581665 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt.cd4ca3ea -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt
	I1008 18:58:34.300907  581665 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key
	I1008 18:58:34.300992  581665 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 18:58:34.301014  581665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt with IP's: []
	I1008 18:58:34.475079  581665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt ...
	I1008 18:58:34.475117  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt: {Name:mkd1cf4e5cbdbc36ca32620740933711ce396a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:34.475300  581665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key ...
	I1008 18:58:34.475319  581665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key: {Name:mk0a3d5535244f50826f941c3e0e9cb2e43b3f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 18:58:34.475514  581665 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 18:58:34.475566  581665 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 18:58:34.475581  581665 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 18:58:34.475620  581665 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 18:58:34.475653  581665 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 18:58:34.475686  581665 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 18:58:34.475742  581665 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 18:58:34.476430  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 18:58:34.502647  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 18:58:34.529724  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 18:58:34.560543  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 18:58:34.584787  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 18:58:34.608641  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 18:58:34.634097  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 18:58:34.657355  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 18:58:34.680626  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 18:58:34.707581  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 18:58:34.731642  581665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 18:58:34.758599  581665 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 18:58:34.779049  581665 ssh_runner.go:195] Run: openssl version
	I1008 18:58:34.786744  581665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 18:58:34.800784  581665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:58:34.805625  581665 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:58:34.805685  581665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 18:58:34.811351  581665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 18:58:34.821692  581665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 18:58:34.831949  581665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 18:58:34.836848  581665 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 18:58:34.836905  581665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 18:58:34.842930  581665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 18:58:34.854110  581665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 18:58:34.864859  581665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 18:58:34.870565  581665 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 18:58:34.870616  581665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 18:58:34.876296  581665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 18:58:34.887440  581665 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 18:58:34.891779  581665 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 18:58:34.891841  581665 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 18:58:34.891921  581665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 18:58:34.891959  581665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 18:58:34.929660  581665 cri.go:89] found id: ""
	I1008 18:58:34.929725  581665 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 18:58:34.940518  581665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 18:58:34.950905  581665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 18:58:34.960773  581665 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 18:58:34.960799  581665 kubeadm.go:157] found existing configuration files:
	
	I1008 18:58:34.960850  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 18:58:34.970642  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 18:58:34.970694  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 18:58:34.980461  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 18:58:34.989821  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 18:58:34.989875  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 18:58:34.999875  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 18:58:35.009051  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 18:58:35.009105  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 18:58:35.018226  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 18:58:35.027039  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 18:58:35.027094  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 18:58:35.036751  581665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 18:58:35.311040  581665 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:00:33.302197  581665 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:00:33.302307  581665 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:00:33.303908  581665 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:00:33.303983  581665 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:00:33.304084  581665 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:00:33.304217  581665 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:00:33.304341  581665 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:00:33.304444  581665 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:00:33.306130  581665 out.go:235]   - Generating certificates and keys ...
	I1008 19:00:33.306226  581665 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:00:33.306294  581665 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:00:33.306399  581665 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 19:00:33.306457  581665 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 19:00:33.306536  581665 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 19:00:33.306595  581665 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 19:00:33.306641  581665 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 19:00:33.306799  581665 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-256554] and IPs [192.168.39.90 127.0.0.1 ::1]
	I1008 19:00:33.306847  581665 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 19:00:33.306992  581665 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-256554] and IPs [192.168.39.90 127.0.0.1 ::1]
	I1008 19:00:33.307097  581665 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 19:00:33.307183  581665 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 19:00:33.307241  581665 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 19:00:33.307327  581665 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:00:33.307389  581665 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:00:33.307449  581665 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:00:33.307527  581665 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:00:33.307604  581665 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:00:33.307757  581665 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:00:33.307866  581665 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:00:33.307928  581665 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:00:33.308034  581665 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:00:33.309400  581665 out.go:235]   - Booting up control plane ...
	I1008 19:00:33.309480  581665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:00:33.309567  581665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:00:33.309646  581665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:00:33.309739  581665 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:00:33.309878  581665 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:00:33.309932  581665 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:00:33.310002  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:00:33.310157  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:00:33.310224  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:00:33.310404  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:00:33.310463  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:00:33.310647  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:00:33.310709  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:00:33.310903  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:00:33.310974  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:00:33.311169  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:00:33.311177  581665 kubeadm.go:310] 
	I1008 19:00:33.311221  581665 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:00:33.311255  581665 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:00:33.311261  581665 kubeadm.go:310] 
	I1008 19:00:33.311307  581665 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:00:33.311360  581665 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:00:33.311508  581665 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:00:33.311518  581665 kubeadm.go:310] 
	I1008 19:00:33.311673  581665 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:00:33.311721  581665 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:00:33.311775  581665 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:00:33.311784  581665 kubeadm.go:310] 
	I1008 19:00:33.311937  581665 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:00:33.312058  581665 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:00:33.312072  581665 kubeadm.go:310] 
	I1008 19:00:33.312216  581665 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:00:33.312333  581665 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:00:33.312407  581665 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:00:33.312469  581665 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:00:33.312545  581665 kubeadm.go:310] 
	W1008 19:00:33.312593  581665 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-256554] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-256554] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-256554] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-256554] and IPs [192.168.39.90 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:00:33.312634  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:00:33.976476  581665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:00:33.992386  581665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:00:34.003477  581665 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:00:34.003499  581665 kubeadm.go:157] found existing configuration files:
	
	I1008 19:00:34.003535  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:00:34.014533  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:00:34.014590  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:00:34.025242  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:00:34.035360  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:00:34.035417  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:00:34.045759  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:00:34.055946  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:00:34.055986  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:00:34.066369  581665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:00:34.076343  581665 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:00:34.076381  581665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:00:34.086928  581665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:00:34.170648  581665 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:00:34.170710  581665 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:00:34.323409  581665 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:00:34.323844  581665 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:00:34.324043  581665 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:00:34.511038  581665 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:00:34.513387  581665 out.go:235]   - Generating certificates and keys ...
	I1008 19:00:34.513481  581665 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:00:34.513540  581665 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:00:34.513670  581665 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:00:34.513748  581665 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:00:34.513808  581665 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:00:34.513862  581665 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:00:34.513915  581665 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:00:34.513974  581665 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:00:34.514044  581665 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:00:34.514143  581665 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:00:34.514208  581665 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:00:34.514298  581665 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:00:34.658864  581665 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:00:34.856656  581665 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:00:35.111330  581665 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:00:35.368360  581665 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:00:35.384543  581665 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:00:35.385721  581665 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:00:35.385783  581665 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:00:35.546917  581665 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:00:35.549458  581665 out.go:235]   - Booting up control plane ...
	I1008 19:00:35.549599  581665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:00:35.551608  581665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:00:35.552602  581665 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:00:35.553417  581665 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:00:35.556111  581665 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:01:15.559081  581665 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:01:15.559530  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:01:15.559723  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:01:20.560616  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:01:20.560836  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:01:30.561783  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:01:30.562012  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:01:50.563782  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:01:50.564060  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:02:30.563683  581665 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:02:30.563890  581665 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:02:30.563901  581665 kubeadm.go:310] 
	I1008 19:02:30.563958  581665 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:02:30.564044  581665 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:02:30.564075  581665 kubeadm.go:310] 
	I1008 19:02:30.564129  581665 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:02:30.564173  581665 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:02:30.564318  581665 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:02:30.564328  581665 kubeadm.go:310] 
	I1008 19:02:30.564462  581665 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:02:30.564527  581665 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:02:30.564591  581665 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:02:30.564601  581665 kubeadm.go:310] 
	I1008 19:02:30.564734  581665 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:02:30.564863  581665 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:02:30.564884  581665 kubeadm.go:310] 
	I1008 19:02:30.564999  581665 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:02:30.565079  581665 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:02:30.565162  581665 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:02:30.565261  581665 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:02:30.565277  581665 kubeadm.go:310] 
	I1008 19:02:30.565493  581665 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:02:30.565616  581665 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:02:30.565727  581665 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:02:30.565823  581665 kubeadm.go:394] duration metric: took 3m55.673985109s to StartCluster
	I1008 19:02:30.565878  581665 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:02:30.565945  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:02:30.608095  581665 cri.go:89] found id: ""
	I1008 19:02:30.608124  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.608135  581665 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:02:30.608143  581665 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:02:30.608209  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:02:30.641680  581665 cri.go:89] found id: ""
	I1008 19:02:30.641711  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.641720  581665 logs.go:284] No container was found matching "etcd"
	I1008 19:02:30.641728  581665 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:02:30.641793  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:02:30.673246  581665 cri.go:89] found id: ""
	I1008 19:02:30.673275  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.673284  581665 logs.go:284] No container was found matching "coredns"
	I1008 19:02:30.673290  581665 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:02:30.673345  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:02:30.705872  581665 cri.go:89] found id: ""
	I1008 19:02:30.705902  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.705910  581665 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:02:30.705925  581665 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:02:30.705974  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:02:30.737919  581665 cri.go:89] found id: ""
	I1008 19:02:30.737945  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.737953  581665 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:02:30.737964  581665 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:02:30.738024  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:02:30.771651  581665 cri.go:89] found id: ""
	I1008 19:02:30.771683  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.771693  581665 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:02:30.771699  581665 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:02:30.771765  581665 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:02:30.805268  581665 cri.go:89] found id: ""
	I1008 19:02:30.805294  581665 logs.go:282] 0 containers: []
	W1008 19:02:30.805301  581665 logs.go:284] No container was found matching "kindnet"
	I1008 19:02:30.805323  581665 logs.go:123] Gathering logs for kubelet ...
	I1008 19:02:30.805337  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:02:30.851461  581665 logs.go:123] Gathering logs for dmesg ...
	I1008 19:02:30.851486  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:02:30.863909  581665 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:02:30.863934  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:02:30.977196  581665 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:02:30.977225  581665 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:02:30.977239  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:02:31.078188  581665 logs.go:123] Gathering logs for container status ...
	I1008 19:02:31.078223  581665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 19:02:31.116183  581665 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:02:31.116251  581665 out.go:270] * 
	* 
	W1008 19:02:31.116309  581665 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:02:31.116324  581665 out.go:270] * 
	* 
	W1008 19:02:31.117123  581665 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:02:31.120500  581665 out.go:201] 
	W1008 19:02:31.121560  581665 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:02:31.121607  581665 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:02:31.121635  581665 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:02:31.123077  581665 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 6 (235.88422ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:02:31.401614  584430 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-256554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (275.45s)
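The kubeadm output above repeatedly shows the kubelet health check on localhost:10248 being refused, and minikube's own suggestion is to inspect the kubelet and retry with the systemd cgroup driver. Below is a minimal diagnosis sketch built only from commands already named in this log; the profile name and start arguments are taken from the failed invocation, and wrapping the node-side commands in `minikube ssh -p` is an assumption about how to reach the KVM guest, not part of the test flow.

	# Inspect the kubelet on the node (both commands are quoted from the kubeadm advice above).
	minikube ssh -p old-k8s-version-256554 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-256554 -- sudo journalctl -xeu kubelet

	# List any control-plane containers CRI-O managed to start (also quoted from the kubeadm advice).
	minikube ssh -p old-k8s-version-256554 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Once a failing container is identified, dump its logs (CONTAINERID is a placeholder).
	minikube ssh -p old-k8s-version-256554 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Retry the start with the workaround minikube itself suggests for this failure mode.
	out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd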

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-966632 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-966632 --alsologtostderr -v=3: exit status 82 (2m0.499728941s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-966632"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:59:52.954269  582943 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:59:52.954449  582943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:59:52.954464  582943 out.go:358] Setting ErrFile to fd 2...
	I1008 18:59:52.954470  582943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:59:52.954734  582943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:59:52.955064  582943 out.go:352] Setting JSON to false
	I1008 18:59:52.955163  582943 mustload.go:65] Loading cluster: no-preload-966632
	I1008 18:59:52.955627  582943 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:59:52.955744  582943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 18:59:52.955990  582943 mustload.go:65] Loading cluster: no-preload-966632
	I1008 18:59:52.956151  582943 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:59:52.956200  582943 stop.go:39] StopHost: no-preload-966632
	I1008 18:59:52.956726  582943 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 18:59:52.956794  582943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:59:52.972791  582943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33865
	I1008 18:59:52.973278  582943 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:59:52.973827  582943 main.go:141] libmachine: Using API Version  1
	I1008 18:59:52.973851  582943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:59:52.974236  582943 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:59:52.976557  582943 out.go:177] * Stopping node "no-preload-966632"  ...
	I1008 18:59:52.977682  582943 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 18:59:52.977715  582943 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 18:59:52.977907  582943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 18:59:52.977935  582943 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 18:59:52.980885  582943 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 18:59:52.981275  582943 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 19:58:42 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 18:59:52.981319  582943 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 18:59:52.981431  582943 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 18:59:52.981609  582943 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 18:59:52.981779  582943 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 18:59:52.981939  582943 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 18:59:53.073959  582943 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 18:59:53.138294  582943 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 18:59:53.198538  582943 main.go:141] libmachine: Stopping "no-preload-966632"...
	I1008 18:59:53.198569  582943 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 18:59:53.200382  582943 main.go:141] libmachine: (no-preload-966632) Calling .Stop
	I1008 18:59:53.204068  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 0/120
	I1008 18:59:54.205530  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 1/120
	I1008 18:59:55.206984  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 2/120
	I1008 18:59:56.208277  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 3/120
	I1008 18:59:57.209559  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 4/120
	I1008 18:59:58.211468  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 5/120
	I1008 18:59:59.212756  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 6/120
	I1008 19:00:00.214731  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 7/120
	I1008 19:00:01.217093  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 8/120
	I1008 19:00:02.218659  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 9/120
	I1008 19:00:03.221214  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 10/120
	I1008 19:00:04.222621  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 11/120
	I1008 19:00:05.224186  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 12/120
	I1008 19:00:06.225947  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 13/120
	I1008 19:00:07.227320  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 14/120
	I1008 19:00:08.229584  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 15/120
	I1008 19:00:09.230941  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 16/120
	I1008 19:00:10.233041  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 17/120
	I1008 19:00:11.234209  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 18/120
	I1008 19:00:12.235837  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 19/120
	I1008 19:00:13.237575  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 20/120
	I1008 19:00:14.239572  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 21/120
	I1008 19:00:15.241533  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 22/120
	I1008 19:00:16.242743  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 23/120
	I1008 19:00:17.245217  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 24/120
	I1008 19:00:18.247770  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 25/120
	I1008 19:00:19.249230  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 26/120
	I1008 19:00:20.250790  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 27/120
	I1008 19:00:21.252875  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 28/120
	I1008 19:00:22.254593  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 29/120
	I1008 19:00:23.256804  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 30/120
	I1008 19:00:24.258375  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 31/120
	I1008 19:00:25.259805  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 32/120
	I1008 19:00:26.261393  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 33/120
	I1008 19:00:27.262752  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 34/120
	I1008 19:00:28.264407  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 35/120
	I1008 19:00:29.265668  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 36/120
	I1008 19:00:30.267104  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 37/120
	I1008 19:00:31.268696  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 38/120
	I1008 19:00:32.270400  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 39/120
	I1008 19:00:33.272320  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 40/120
	I1008 19:00:34.273753  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 41/120
	I1008 19:00:35.275859  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 42/120
	I1008 19:00:36.277015  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 43/120
	I1008 19:00:37.278457  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 44/120
	I1008 19:00:38.280111  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 45/120
	I1008 19:00:39.281594  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 46/120
	I1008 19:00:40.283605  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 47/120
	I1008 19:00:41.285705  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 48/120
	I1008 19:00:42.287060  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 49/120
	I1008 19:00:43.289190  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 50/120
	I1008 19:00:44.290636  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 51/120
	I1008 19:00:45.292325  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 52/120
	I1008 19:00:46.293686  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 53/120
	I1008 19:00:47.295333  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 54/120
	I1008 19:00:48.297582  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 55/120
	I1008 19:00:49.299044  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 56/120
	I1008 19:00:50.301007  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 57/120
	I1008 19:00:51.302437  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 58/120
	I1008 19:00:52.303888  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 59/120
	I1008 19:00:53.305817  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 60/120
	I1008 19:00:54.307936  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 61/120
	I1008 19:00:55.309387  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 62/120
	I1008 19:00:56.311218  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 63/120
	I1008 19:00:57.312873  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 64/120
	I1008 19:00:58.314559  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 65/120
	I1008 19:00:59.316736  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 66/120
	I1008 19:01:00.318014  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 67/120
	I1008 19:01:01.319309  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 68/120
	I1008 19:01:02.320746  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 69/120
	I1008 19:01:03.322569  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 70/120
	I1008 19:01:04.323912  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 71/120
	I1008 19:01:05.325384  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 72/120
	I1008 19:01:06.326640  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 73/120
	I1008 19:01:07.328044  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 74/120
	I1008 19:01:08.329939  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 75/120
	I1008 19:01:09.331221  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 76/120
	I1008 19:01:10.332865  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 77/120
	I1008 19:01:11.334596  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 78/120
	I1008 19:01:12.336864  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 79/120
	I1008 19:01:13.338664  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 80/120
	I1008 19:01:14.340719  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 81/120
	I1008 19:01:15.342102  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 82/120
	I1008 19:01:16.343658  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 83/120
	I1008 19:01:17.344865  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 84/120
	I1008 19:01:18.346746  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 85/120
	I1008 19:01:19.349079  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 86/120
	I1008 19:01:20.350457  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 87/120
	I1008 19:01:21.351850  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 88/120
	I1008 19:01:22.353053  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 89/120
	I1008 19:01:23.355218  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 90/120
	I1008 19:01:24.356474  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 91/120
	I1008 19:01:25.357842  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 92/120
	I1008 19:01:26.359229  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 93/120
	I1008 19:01:27.360467  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 94/120
	I1008 19:01:28.362427  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 95/120
	I1008 19:01:29.363566  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 96/120
	I1008 19:01:30.364949  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 97/120
	I1008 19:01:31.366290  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 98/120
	I1008 19:01:32.367632  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 99/120
	I1008 19:01:33.369470  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 100/120
	I1008 19:01:34.370594  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 101/120
	I1008 19:01:35.371767  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 102/120
	I1008 19:01:36.372902  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 103/120
	I1008 19:01:37.374242  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 104/120
	I1008 19:01:38.375747  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 105/120
	I1008 19:01:39.377139  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 106/120
	I1008 19:01:40.378606  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 107/120
	I1008 19:01:41.379844  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 108/120
	I1008 19:01:42.381295  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 109/120
	I1008 19:01:43.383488  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 110/120
	I1008 19:01:44.384682  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 111/120
	I1008 19:01:45.386067  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 112/120
	I1008 19:01:46.387290  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 113/120
	I1008 19:01:47.388670  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 114/120
	I1008 19:01:48.390895  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 115/120
	I1008 19:01:49.392399  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 116/120
	I1008 19:01:50.393583  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 117/120
	I1008 19:01:51.395744  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 118/120
	I1008 19:01:52.397024  582943 main.go:141] libmachine: (no-preload-966632) Waiting for machine to stop 119/120
	I1008 19:01:53.398132  582943 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1008 19:01:53.398194  582943 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1008 19:01:53.399794  582943 out.go:201] 
	W1008 19:01:53.400884  582943 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1008 19:01:53.400898  582943 out.go:270] * 
	* 
	W1008 19:01:53.404115  582943 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:01:53.405465  582943 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-966632 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632: exit status 3 (18.543990219s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:02:11.950666  584156 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host
	E1008 19:02:11.950689  584156 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-966632" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.04s)
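
All three Stop failures in this run share the same shape: "minikube stop" backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, asks the kvm2 driver to stop the domain, then polls the machine state once per second for 120 attempts; the domain never leaves "Running", and the command exits with status 82 (GUEST_STOP_TIMEOUT). The sketch below is a minimal illustration of that poll-until-stopped pattern, not minikube's actual implementation; the stop/getState callbacks stand in for the driver's .Stop and .GetState calls seen in the log.

// Illustrative sketch only (not minikube's actual code): the poll loop behind the
// "Waiting for machine to stop N/120" lines above. stop and getState stand in for
// the kvm2 driver's .Stop and .GetState calls.
package main

import (
	"fmt"
	"time"
)

// stopWithTimeout requests a stop, then polls once per second up to attempts
// times; if the machine is still running afterwards it returns the same error
// that surfaces above as GUEST_STOP_TIMEOUT (exit status 82).
func stopWithTimeout(stop func() error, getState func() (string, error), attempts int) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil // clean shutdown
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", "Running")
}

func main() {
	// Fake driver that never stops, so the timeout path is reproduced in 3 seconds.
	err := stopWithTimeout(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		3,
	)
	fmt.Println("stop err:", err)
}

The status checks that follow fail for a separate reason: the guest's SSH port is unreachable (see the note after the embed-certs failure below).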

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-783146 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-783146 --alsologtostderr -v=3: exit status 82 (2m0.492156346s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-783146"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 19:01:14.626176  583907 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:01:14.626311  583907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:01:14.626335  583907 out.go:358] Setting ErrFile to fd 2...
	I1008 19:01:14.626341  583907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:01:14.626531  583907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:01:14.626775  583907 out.go:352] Setting JSON to false
	I1008 19:01:14.626851  583907 mustload.go:65] Loading cluster: embed-certs-783146
	I1008 19:01:14.627201  583907 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:01:14.627270  583907 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:01:14.627436  583907 mustload.go:65] Loading cluster: embed-certs-783146
	I1008 19:01:14.627530  583907 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:01:14.627561  583907 stop.go:39] StopHost: embed-certs-783146
	I1008 19:01:14.627924  583907 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:01:14.627977  583907 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:01:14.644164  583907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1008 19:01:14.644614  583907 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:01:14.645157  583907 main.go:141] libmachine: Using API Version  1
	I1008 19:01:14.645182  583907 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:01:14.645529  583907 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:01:14.647693  583907 out.go:177] * Stopping node "embed-certs-783146"  ...
	I1008 19:01:14.649234  583907 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 19:01:14.649263  583907 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:01:14.649464  583907 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 19:01:14.649486  583907 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:01:14.652290  583907 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:01:14.652725  583907 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 19:59:50 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:01:14.652753  583907 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:01:14.652929  583907 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:01:14.653096  583907 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:01:14.653229  583907 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:01:14.653346  583907 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:01:14.759268  583907 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 19:01:14.813783  583907 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 19:01:14.871826  583907 main.go:141] libmachine: Stopping "embed-certs-783146"...
	I1008 19:01:14.871858  583907 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:01:14.873692  583907 main.go:141] libmachine: (embed-certs-783146) Calling .Stop
	I1008 19:01:14.877370  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 0/120
	I1008 19:01:15.878874  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 1/120
	I1008 19:01:16.880930  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 2/120
	I1008 19:01:17.882341  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 3/120
	I1008 19:01:18.884130  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 4/120
	I1008 19:01:19.886217  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 5/120
	I1008 19:01:20.887598  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 6/120
	I1008 19:01:21.889016  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 7/120
	I1008 19:01:22.890447  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 8/120
	I1008 19:01:23.891741  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 9/120
	I1008 19:01:24.893623  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 10/120
	I1008 19:01:25.895143  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 11/120
	I1008 19:01:26.896522  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 12/120
	I1008 19:01:27.897789  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 13/120
	I1008 19:01:28.899232  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 14/120
	I1008 19:01:29.900974  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 15/120
	I1008 19:01:30.902267  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 16/120
	I1008 19:01:31.903700  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 17/120
	I1008 19:01:32.905045  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 18/120
	I1008 19:01:33.906364  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 19/120
	I1008 19:01:34.908583  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 20/120
	I1008 19:01:35.909957  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 21/120
	I1008 19:01:36.911222  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 22/120
	I1008 19:01:37.912632  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 23/120
	I1008 19:01:38.913835  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 24/120
	I1008 19:01:39.915917  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 25/120
	I1008 19:01:40.917203  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 26/120
	I1008 19:01:41.918691  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 27/120
	I1008 19:01:42.920121  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 28/120
	I1008 19:01:43.921576  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 29/120
	I1008 19:01:44.923769  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 30/120
	I1008 19:01:45.925368  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 31/120
	I1008 19:01:46.926996  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 32/120
	I1008 19:01:47.928388  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 33/120
	I1008 19:01:48.929745  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 34/120
	I1008 19:01:49.931604  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 35/120
	I1008 19:01:50.933021  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 36/120
	I1008 19:01:51.934537  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 37/120
	I1008 19:01:52.935948  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 38/120
	I1008 19:01:53.937444  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 39/120
	I1008 19:01:54.939571  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 40/120
	I1008 19:01:55.941001  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 41/120
	I1008 19:01:56.942590  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 42/120
	I1008 19:01:57.943880  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 43/120
	I1008 19:01:58.945301  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 44/120
	I1008 19:01:59.947319  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 45/120
	I1008 19:02:00.948779  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 46/120
	I1008 19:02:01.950271  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 47/120
	I1008 19:02:02.951550  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 48/120
	I1008 19:02:03.953046  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 49/120
	I1008 19:02:04.955210  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 50/120
	I1008 19:02:05.956523  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 51/120
	I1008 19:02:06.957912  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 52/120
	I1008 19:02:07.959187  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 53/120
	I1008 19:02:08.960539  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 54/120
	I1008 19:02:09.962687  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 55/120
	I1008 19:02:10.964018  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 56/120
	I1008 19:02:11.965193  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 57/120
	I1008 19:02:12.966543  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 58/120
	I1008 19:02:13.968071  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 59/120
	I1008 19:02:14.969886  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 60/120
	I1008 19:02:15.971104  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 61/120
	I1008 19:02:16.972561  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 62/120
	I1008 19:02:17.974022  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 63/120
	I1008 19:02:18.975344  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 64/120
	I1008 19:02:19.977359  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 65/120
	I1008 19:02:20.978660  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 66/120
	I1008 19:02:21.979997  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 67/120
	I1008 19:02:22.981318  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 68/120
	I1008 19:02:23.982666  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 69/120
	I1008 19:02:24.984610  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 70/120
	I1008 19:02:25.985946  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 71/120
	I1008 19:02:26.987223  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 72/120
	I1008 19:02:27.988608  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 73/120
	I1008 19:02:28.990009  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 74/120
	I1008 19:02:29.991940  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 75/120
	I1008 19:02:30.993613  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 76/120
	I1008 19:02:31.995209  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 77/120
	I1008 19:02:32.996398  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 78/120
	I1008 19:02:33.997634  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 79/120
	I1008 19:02:34.999593  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 80/120
	I1008 19:02:36.000970  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 81/120
	I1008 19:02:37.002066  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 82/120
	I1008 19:02:38.003454  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 83/120
	I1008 19:02:39.004890  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 84/120
	I1008 19:02:40.006951  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 85/120
	I1008 19:02:41.008171  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 86/120
	I1008 19:02:42.009492  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 87/120
	I1008 19:02:43.010964  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 88/120
	I1008 19:02:44.012888  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 89/120
	I1008 19:02:45.015009  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 90/120
	I1008 19:02:46.016880  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 91/120
	I1008 19:02:47.018093  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 92/120
	I1008 19:02:48.019302  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 93/120
	I1008 19:02:49.020540  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 94/120
	I1008 19:02:50.022546  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 95/120
	I1008 19:02:51.024016  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 96/120
	I1008 19:02:52.025426  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 97/120
	I1008 19:02:53.026883  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 98/120
	I1008 19:02:54.028412  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 99/120
	I1008 19:02:55.030966  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 100/120
	I1008 19:02:56.032557  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 101/120
	I1008 19:02:57.034000  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 102/120
	I1008 19:02:58.035405  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 103/120
	I1008 19:02:59.036984  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 104/120
	I1008 19:03:00.038817  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 105/120
	I1008 19:03:01.040074  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 106/120
	I1008 19:03:02.041354  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 107/120
	I1008 19:03:03.042932  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 108/120
	I1008 19:03:04.044368  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 109/120
	I1008 19:03:05.046672  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 110/120
	I1008 19:03:06.048103  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 111/120
	I1008 19:03:07.049534  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 112/120
	I1008 19:03:08.050858  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 113/120
	I1008 19:03:09.052225  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 114/120
	I1008 19:03:10.054188  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 115/120
	I1008 19:03:11.055569  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 116/120
	I1008 19:03:12.056954  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 117/120
	I1008 19:03:13.058342  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 118/120
	I1008 19:03:14.059669  583907 main.go:141] libmachine: (embed-certs-783146) Waiting for machine to stop 119/120
	I1008 19:03:15.060823  583907 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1008 19:03:15.060881  583907 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1008 19:03:15.062668  583907 out.go:201] 
	W1008 19:03:15.063816  583907 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1008 19:03:15.063831  583907 out.go:270] * 
	* 
	W1008 19:03:15.067105  583907 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:03:15.068298  583907 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-783146 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146: exit status 3 (18.54552387s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:03:33.614705  584731 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host
	E1008 19:03:33.614726  584731 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-783146" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
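
As with no-preload above, the post-mortem status check cannot open an SSH session to the guest ("dial tcp 192.168.72.183:22: connect: no route to host"), so "minikube status" exits 3, reports the host as "Error" rather than "Stopped", and log retrieval is skipped. Below is a minimal reachability probe in the same spirit, assuming the guest IP and port taken from this run's log; it is illustrative only and not part of minikube.

// Minimal sketch: probe the guest's SSH port the way the failing status check does.
// The address below is the embed-certs-783146 IP from this particular run and is
// only an example.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.72.183:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Matches the status.go errors above: the guest is neither cleanly stopped
		// nor reachable, so status reports "Error" (exit status 3).
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable via", conn.LocalAddr())
}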

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-142496 --alsologtostderr -v=3
E1008 19:01:38.895529  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-142496 --alsologtostderr -v=3: exit status 82 (2m0.466546637s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-142496"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 19:01:19.041580  583993 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:01:19.041689  583993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:01:19.041697  583993 out.go:358] Setting ErrFile to fd 2...
	I1008 19:01:19.041701  583993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:01:19.041894  583993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:01:19.042111  583993 out.go:352] Setting JSON to false
	I1008 19:01:19.042181  583993 mustload.go:65] Loading cluster: default-k8s-diff-port-142496
	I1008 19:01:19.042547  583993 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:01:19.042616  583993 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:01:19.042789  583993 mustload.go:65] Loading cluster: default-k8s-diff-port-142496
	I1008 19:01:19.042888  583993 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:01:19.042917  583993 stop.go:39] StopHost: default-k8s-diff-port-142496
	I1008 19:01:19.043274  583993 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:01:19.043317  583993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:01:19.058168  583993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I1008 19:01:19.058617  583993 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:01:19.059161  583993 main.go:141] libmachine: Using API Version  1
	I1008 19:01:19.059183  583993 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:01:19.059483  583993 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:01:19.061501  583993 out.go:177] * Stopping node "default-k8s-diff-port-142496"  ...
	I1008 19:01:19.062613  583993 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1008 19:01:19.062640  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:01:19.062845  583993 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1008 19:01:19.062870  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:01:19.065414  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:01:19.065831  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:00:29 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:01:19.065855  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:01:19.065982  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:01:19.066148  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:01:19.066284  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:01:19.066427  583993 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:01:19.167266  583993 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1008 19:01:19.229814  583993 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1008 19:01:19.266870  583993 main.go:141] libmachine: Stopping "default-k8s-diff-port-142496"...
	I1008 19:01:19.266901  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:01:19.268676  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Stop
	I1008 19:01:19.272236  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 0/120
	I1008 19:01:20.273847  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 1/120
	I1008 19:01:21.275251  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 2/120
	I1008 19:01:22.276491  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 3/120
	I1008 19:01:23.277811  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 4/120
	I1008 19:01:24.279735  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 5/120
	I1008 19:01:25.281079  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 6/120
	I1008 19:01:26.282256  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 7/120
	I1008 19:01:27.283563  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 8/120
	I1008 19:01:28.284945  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 9/120
	I1008 19:01:29.287326  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 10/120
	I1008 19:01:30.288780  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 11/120
	I1008 19:01:31.290176  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 12/120
	I1008 19:01:32.291494  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 13/120
	I1008 19:01:33.292900  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 14/120
	I1008 19:01:34.294938  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 15/120
	I1008 19:01:35.296185  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 16/120
	I1008 19:01:36.297500  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 17/120
	I1008 19:01:37.298869  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 18/120
	I1008 19:01:38.300346  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 19/120
	I1008 19:01:39.302529  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 20/120
	I1008 19:01:40.304040  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 21/120
	I1008 19:01:41.305360  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 22/120
	I1008 19:01:42.306722  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 23/120
	I1008 19:01:43.308020  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 24/120
	I1008 19:01:44.310008  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 25/120
	I1008 19:01:45.311409  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 26/120
	I1008 19:01:46.312710  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 27/120
	I1008 19:01:47.314274  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 28/120
	I1008 19:01:48.315524  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 29/120
	I1008 19:01:49.317532  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 30/120
	I1008 19:01:50.319102  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 31/120
	I1008 19:01:51.320829  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 32/120
	I1008 19:01:52.322278  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 33/120
	I1008 19:01:53.323633  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 34/120
	I1008 19:01:54.325455  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 35/120
	I1008 19:01:55.327040  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 36/120
	I1008 19:01:56.328339  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 37/120
	I1008 19:01:57.329854  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 38/120
	I1008 19:01:58.331316  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 39/120
	I1008 19:01:59.333405  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 40/120
	I1008 19:02:00.334846  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 41/120
	I1008 19:02:01.336252  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 42/120
	I1008 19:02:02.337680  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 43/120
	I1008 19:02:03.339059  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 44/120
	I1008 19:02:04.340888  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 45/120
	I1008 19:02:05.342134  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 46/120
	I1008 19:02:06.343557  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 47/120
	I1008 19:02:07.344954  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 48/120
	I1008 19:02:08.346490  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 49/120
	I1008 19:02:09.348704  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 50/120
	I1008 19:02:10.350132  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 51/120
	I1008 19:02:11.351466  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 52/120
	I1008 19:02:12.352731  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 53/120
	I1008 19:02:13.354113  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 54/120
	I1008 19:02:14.356226  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 55/120
	I1008 19:02:15.358256  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 56/120
	I1008 19:02:16.359524  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 57/120
	I1008 19:02:17.360842  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 58/120
	I1008 19:02:18.362118  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 59/120
	I1008 19:02:19.364409  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 60/120
	I1008 19:02:20.365568  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 61/120
	I1008 19:02:21.366650  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 62/120
	I1008 19:02:22.367925  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 63/120
	I1008 19:02:23.369339  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 64/120
	I1008 19:02:24.371554  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 65/120
	I1008 19:02:25.372930  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 66/120
	I1008 19:02:26.374304  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 67/120
	I1008 19:02:27.375878  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 68/120
	I1008 19:02:28.377222  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 69/120
	I1008 19:02:29.379303  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 70/120
	I1008 19:02:30.380479  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 71/120
	I1008 19:02:31.381754  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 72/120
	I1008 19:02:32.383157  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 73/120
	I1008 19:02:33.384499  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 74/120
	I1008 19:02:34.386564  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 75/120
	I1008 19:02:35.387855  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 76/120
	I1008 19:02:36.389199  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 77/120
	I1008 19:02:37.390688  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 78/120
	I1008 19:02:38.392782  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 79/120
	I1008 19:02:39.394725  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 80/120
	I1008 19:02:40.396619  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 81/120
	I1008 19:02:41.397859  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 82/120
	I1008 19:02:42.399192  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 83/120
	I1008 19:02:43.400308  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 84/120
	I1008 19:02:44.402047  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 85/120
	I1008 19:02:45.403487  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 86/120
	I1008 19:02:46.404722  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 87/120
	I1008 19:02:47.406112  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 88/120
	I1008 19:02:48.407382  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 89/120
	I1008 19:02:49.409712  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 90/120
	I1008 19:02:50.411424  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 91/120
	I1008 19:02:51.412697  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 92/120
	I1008 19:02:52.413858  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 93/120
	I1008 19:02:53.415192  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 94/120
	I1008 19:02:54.416830  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 95/120
	I1008 19:02:55.418401  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 96/120
	I1008 19:02:56.419738  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 97/120
	I1008 19:02:57.421208  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 98/120
	I1008 19:02:58.422570  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 99/120
	I1008 19:02:59.423819  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 100/120
	I1008 19:03:00.425105  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 101/120
	I1008 19:03:01.426708  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 102/120
	I1008 19:03:02.427995  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 103/120
	I1008 19:03:03.429664  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 104/120
	I1008 19:03:04.431454  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 105/120
	I1008 19:03:05.432764  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 106/120
	I1008 19:03:06.433977  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 107/120
	I1008 19:03:07.435211  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 108/120
	I1008 19:03:08.436395  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 109/120
	I1008 19:03:09.438340  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 110/120
	I1008 19:03:10.439587  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 111/120
	I1008 19:03:11.440792  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 112/120
	I1008 19:03:12.442048  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 113/120
	I1008 19:03:13.443301  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 114/120
	I1008 19:03:14.445050  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 115/120
	I1008 19:03:15.446554  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 116/120
	I1008 19:03:16.447727  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 117/120
	I1008 19:03:17.449007  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 118/120
	I1008 19:03:18.450424  583993 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for machine to stop 119/120
	I1008 19:03:19.451725  583993 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1008 19:03:19.451809  583993 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1008 19:03:19.453438  583993 out.go:201] 
	W1008 19:03:19.454636  583993 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1008 19:03:19.454660  583993 out.go:270] * 
	* 
	W1008 19:03:19.458259  583993 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:03:19.459730  583993 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-142496 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496: exit status 3 (18.504630346s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:03:37.966673  584777 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E1008 19:03:37.966696  584777 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-142496" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.97s)
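
The same two-minute stop timeout hits all three profiles started in this window (no-preload-966632, embed-certs-783146, default-k8s-diff-port-142496), which points at the host/libvirt side rather than any single guest. The triage sketch below checks libvirt's view of the domains; it assumes the kvm2 driver named each domain after its profile and that virsh can reach the same libvirt instance (for example via LIBVIRT_DEFAULT_URI=qemu:///system), neither of which this report confirms.

// Triage sketch (assumptions noted above): report each profile's libvirt domain state.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profiles := []string{
		"no-preload-966632",
		"embed-certs-783146",
		"default-k8s-diff-port-142496",
	}
	for _, p := range profiles {
		out, err := exec.Command("virsh", "domstate", p).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil {
			fmt.Printf("%s: virsh error: %v (%s)\n", p, err, state)
			continue
		}
		fmt.Printf("%s: %s\n", p, state)
	}
}

A domain that still reports "running" here would confirm the guest ignored the stop request; "virsh destroy <domain>" (forced power-off) is the usual manual fallback before re-running the suite.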

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632
E1008 19:02:14.836956  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632: exit status 3 (3.167737964s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:02:15.118667  584234 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host
	E1008 19:02:15.118699  584234 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-966632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-966632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152752151s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-966632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632: exit status 3 (3.062996818s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:02:24.334762  584323 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host
	E1008 19:02:24.334786  584323 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.141:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-966632" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
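The addon enable aborts with MK_ADDON_ENABLE_PAUSED only because its pre-flight "check paused" step needs an SSH session to run `crictl` inside the guest, and 192.168.61.141:22 is unreachable for the same "no route to host" reason as the status probes. A hedged retry sketch (the `nc` reachability probe is illustrative and not part of the test harness):

    # Confirm the guest's SSH port answers before retrying
    nc -vz -w 5 192.168.61.141 22
    # The test expects "Stopped" here after `minikube stop`; "Error" means the stop never completed cleanly
    out/minikube-linux-amd64 status -p no-preload-966632 --format='{{.Host}}'
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-966632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4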

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-256554 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-256554 create -f testdata/busybox.yaml: exit status 1 (43.971429ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-256554" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-256554 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 6 (238.419066ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:02:31.684178  584470 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-256554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 6 (216.057893ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:02:31.900573  584500 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-256554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
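Both kubectl invocations fail only because the kubeconfig no longer contains an "old-k8s-version-256554" entry, which the status output flags as a stale context. A small sketch of the repair that the warning itself suggests, using the profile name from this log:

    # Regenerate the kubeconfig entry for the profile
    out/minikube-linux-amd64 update-context -p old-k8s-version-256554
    # Verify the context exists again before re-applying the manifest
    kubectl config get-contexts old-k8s-version-256554
    kubectl --context old-k8s-version-256554 create -f testdata/busybox.yaml

Whether that succeeds depends on the cluster actually being up; the SecondStart log further down shows the control plane never finished booting, so the missing context looks like a symptom rather than the root cause here.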

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-256554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-256554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m44.962447317s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-256554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-256554 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-256554 describe deploy/metrics-server -n kube-system: exit status 1 (44.366019ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-256554" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-256554 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 6 (221.384654ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:04:17.128769  585239 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-256554" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.23s)
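The enable fails at the kubectl apply stage because the apiserver inside the VM refuses connections on localhost:8443, and the follow-up `kubectl describe` then fails for the same missing-context reason as the DeployApp test above, so the image assertion has nothing to inspect. Once the apiserver is answering, a hedged way to perform the same image check the test does (expected value taken from the failure message above):

    # Inspect the metrics-server image; the test expects it to contain fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context old-k8s-version-256554 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'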

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146: exit status 3 (3.168057835s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:03:36.782714  584855 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host
	E1008 19:03:36.782740  584855 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-783146 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-783146 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152632586s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-783146 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146: exit status 3 (3.062658328s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:03:45.998705  584968 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host
	E1008 19:03:45.998728  584968 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-783146" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496: exit status 3 (3.167905583s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:03:41.134701  584904 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E1008 19:03:41.134725  584904 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142496 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142496 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153023578s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142496 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496: exit status 3 (3.062636241s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 19:03:50.350701  585050 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E1008 19:03:50.350724  585050 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-142496" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (710.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1008 19:05:51.766479  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:06:38.896097  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:08:01.965175  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:10:51.764662  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:11:38.896317  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m46.889548313s)

                                                
                                                
-- stdout --
	* [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
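The script just sent over SSH keeps the 127.0.1.1 entry in /etc/hosts pointing at the machine's new hostname. A standalone sketch of the same repair (the NEW_HOSTNAME variable is ours, not minikube's; its value comes from the log above):

	NEW_HOSTNAME=old-k8s-version-256554
	if ! grep -q "[[:space:]]${NEW_HOSTNAME}\$" /etc/hosts; then
		if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
			# replace the existing 127.0.1.1 alias with the new hostname
			sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
		else
			# no 127.0.1.1 line yet, append one
			echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
		fi
	fi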
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
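configureAuth regenerates a server certificate for the VM with the organization and SAN list shown above. minikube does this in Go; a rough openssl equivalent under the same SANs would look like this (the file names are illustrative only, not minikube's actual paths):

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.old-k8s-version-256554" \
		-keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
		-extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.90,DNS:localhost,DNS:minikube,DNS:old-k8s-version-256554') \
		-out server.pem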
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
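provisionDockerMachine ends by dropping a sysconfig fragment that hands cri-o an --insecure-registry flag for the 10.96.0.0/12 service CIDR and then restarting the runtime. A sketch that is equivalent in effect to the command shown above:

	sudo mkdir -p /etc/sysconfig
	echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio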
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
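The find invocation above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix; re-quoted for an interactive shell it reads:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
		\( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		-printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;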
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
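With crictl pointed at the cri-o socket (the /etc/crictl.yaml write above), the drop-in at /etc/crio/crio.conf.d/02-crio.conf is patched in place: the pause image is pinned, the cgroup manager is forced to cgroupfs, and conmon_cgroup is re-added as "pod". Collected in one place, using the same commands as in the log:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"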
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
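The sysctl probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded; loading the module and enabling IP forwarding, as the two commands just above do, is the fix:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables    # resolves once br_netfilter is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"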
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
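Since no preloaded images were found in the container runtime, the ~473 MB preload tarball is copied to the VM and unpacked into /var before being deleted. The unpack/cleanup pair as run above:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4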
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
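The [Unit]/[Service] fragment printed above is what later lands on the VM as the 429-byte kubelet drop-in (see the scp a few lines below). Written by hand it would look like this, with the ExecStart line copied from the log:

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90

	[Install]
	EOF
	sudo systemctl daemon-reload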
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
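The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2120-byte scp below) and later diffed against the existing /var/tmp/minikube/kubeadm.yaml; only a non-empty diff would force the control plane to be reconfigured. Roughly:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
		echo "running cluster does not require reconfiguration"
	fi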
	
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
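Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above); the hash comes from openssl itself, e.g.:

	for pem in minikubeCA.pem 537013.pem 5370132.pem; do
		hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${pem}")
		sudo ln -fs "/usr/share/ca-certificates/${pem}" "/etc/ssl/certs/${hash}.0"
	done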
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
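openssl's -checkend N exits non-zero when the certificate expires within N seconds, so each of the checks above asks whether the cert is still valid for at least another 24 hours. For example:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		echo "certificate valid for at least another 24h"
	else
		echo "certificate expires (or already expired) within 24h"
	fi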
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
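Before regenerating anything, the restart path checks whether the kubeconfig-style files already on the node still point at the expected control-plane endpoint; any file that does not contain https://control-plane.minikube.internal:8443 is removed (here all four files are simply missing, so each grep fails and the rm is a no-op). The loop below is a hedged shell equivalent of the grep-and-remove pairs above, not minikube's actual code:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
    # then the freshly rendered config replaces the old one
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml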
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
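The long run of pgrep calls above is minikube waiting for a kube-apiserver process to appear after the kubeadm phases: it re-checks roughly every 500 ms and, when nothing shows up after about a minute, falls through to the diagnostics pass that follows. A plain-shell equivalent of that wait (the 60-second budget here is an assumption for illustration, not a value taken from the log):

    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' > /dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver never appeared" >&2
        break
      fi
      sleep 0.5
    done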
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
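Each diagnostics pass gathers the same five sources seen here: the kubelet and CRI-O journals, recent kernel warnings, a kubectl describe nodes against the node-local kubeconfig (which fails while the apiserver is down), and a container listing. They can be reproduced by hand on the node with the commands the log already shows:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig    # refused here: localhost:8443 is down
    sudo journalctl -u crio -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a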
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
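
Every "describe nodes" gather in this loop fails the same way: with no kube-apiserver container or process on the node, nothing is serving the API on port 8443, so kubectl's connection to localhost:8443 is refused. The check can be reproduced by hand with the command the log itself runs on the node (assuming the v1.20.0 binaries are in the default /var/lib/minikube path it shows):

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
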
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
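
The container-status gather is a fallback chain: the backtick substitution `which crictl || echo crictl` resolves to the crictl binary if one is on PATH (otherwise it leaves the bare name for the shell to fail on), and if that ps -a invocation fails the command falls back to sudo docker ps -a, so the same collector works on CRI-O and Docker nodes alike. Here it succeeds via crictl but, consistent with the per-component listings above, reports no control-plane containers.
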
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
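
Every "describe nodes" attempt in this section fails the same way: the bundled kubectl talks to localhost:8443, but with no kube-apiserver container found by the crictl probes above, nothing is listening there. A quick way to confirm both facts together on the node, reusing the kubectl binary path and kubeconfig the log already shows (a sketch only; curl being present on the VM is an assumption):

	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"   # port 8443 check
	sudo crictl ps -a --name=kube-apiserver                                     # any apiserver container at all?
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig                               # same endpoint the log keeps retrying
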
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
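
The four grep/rm pairs above are minikube's stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443. Here none of the files exist, so every grep exits with status 2 and the rm is effectively a no-op. The equivalent loop, written out as a sketch of the same logic:

	# Sketch of the stale kubeconfig cleanup shown above.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}"; then
	    # file is missing or points elsewhere: remove it so kubeadm regenerates it
	    sudo rm -f "/etc/kubernetes/${f}"
	  fi
	done
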
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
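
kubeadm's own advice above is the natural starting point here: the wait-control-plane phase timed out because the kubelet healthz endpoint on port 10248 never came up. A condensed version of those checks, on the node (the crictl socket path is the one the log itself uses; the extra journalctl/systemctl flags are standard options added for convenience):

	systemctl status kubelet --no-pager          # is the unit running, and why did it exit?
	journalctl -xeu kubelet -n 200 --no-pager    # recent kubelet errors
	curl -sSL http://localhost:10248/healthz     # the probe kubeadm kept retrying
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
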
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	* 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	* 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
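A minimal follow-up sketch, collected from the kubeadm advice and the minikube suggestion printed above rather than from this recorded run, assuming a local retry against the same old-k8s-version-256554 profile:

	# Inside the VM (e.g. via `minikube ssh -p old-k8s-version-256554`): check why the
	# kubelet never answered on http://localhost:10248/healthz during wait-control-plane.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers CRI-O started, then inspect a failing one:
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# From the host: retry the start with the kubelet cgroup driver the exit message suggests.
	out/minikube-linux-amd64 start -p old-k8s-version-256554 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd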
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (245.256628ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
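A hedged check, not part of the recorded run: with the host still reported as Running, the remaining status fields show whether only the control-plane components are down (assuming the standard minikube status fields Host, Kubelet and APIServer):

	out/minikube-linux-amd64 status -p old-k8s-version-256554 \
	  --format='host: {{.Host}} kubelet: {{.Kubelet}} apiserver: {{.APIServer}}'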
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-256554 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-256554 logs -n 25: (1.549343504s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-038693 sudo                            | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                 | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:04:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:20.270613  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:23.342576  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:04:29.422638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:32.494606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:38.574600  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:41.646592  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:47.726606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:50.798595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:56.878669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:59.950607  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:06.030583  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:09.102584  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:15.182571  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:18.254590  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:24.334638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:27.406606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:33.486619  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:36.558552  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:42.638565  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:45.710610  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:51.790561  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:54.862591  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:00.942606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:04.014669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:10.094618  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:13.166598  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:19.246573  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:22.318595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:28.398732  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:31.470685  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:37.550574  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:40.622614  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:46.702620  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:49.774581  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:55.854627  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:58.926568  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:07:01.929445  585014 start.go:364] duration metric: took 3m15.782086174s to acquireMachinesLock for "embed-certs-783146"
	I1008 19:07:01.929517  585014 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:01.929523  585014 fix.go:54] fixHost starting: 
	I1008 19:07:01.929889  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:01.929945  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:01.945409  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 19:07:01.945858  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:01.946357  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:01.946387  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:01.946744  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:01.946895  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:01.947028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:01.948399  585014 fix.go:112] recreateIfNeeded on embed-certs-783146: state=Stopped err=<nil>
	I1008 19:07:01.948419  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	W1008 19:07:01.948545  585014 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:01.954020  585014 out.go:177] * Restarting existing kvm2 VM for "embed-certs-783146" ...
	I1008 19:07:01.926825  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:01.926871  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927219  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:07:01.927270  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927475  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:07:01.929278  584371 machine.go:96] duration metric: took 4m37.425232924s to provisionDockerMachine
	I1008 19:07:01.929341  584371 fix.go:56] duration metric: took 4m37.445578307s for fixHost
	I1008 19:07:01.929349  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 4m37.445609603s
	W1008 19:07:01.929369  584371 start.go:714] error starting host: provision: host is not running
	W1008 19:07:01.929510  584371 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1008 19:07:01.929524  584371 start.go:729] Will try again in 5 seconds ...
	I1008 19:07:01.955309  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Start
	I1008 19:07:01.955452  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 19:07:01.956122  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 19:07:01.956432  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 19:07:01.956743  585014 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 19:07:01.957427  585014 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 19:07:03.159229  585014 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 19:07:03.160116  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.160503  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.160565  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.160497  585935 retry.go:31] will retry after 282.873854ms: waiting for machine to come up
	I1008 19:07:03.445297  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.445810  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.445838  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.445740  585935 retry.go:31] will retry after 344.936527ms: waiting for machine to come up
	I1008 19:07:03.792413  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.792802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.792837  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.792741  585935 retry.go:31] will retry after 414.968289ms: waiting for machine to come up
	I1008 19:07:04.209200  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.209532  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.209555  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.209502  585935 retry.go:31] will retry after 403.180416ms: waiting for machine to come up
	I1008 19:07:04.614156  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.614679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.614713  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.614636  585935 retry.go:31] will retry after 631.841511ms: waiting for machine to come up
	I1008 19:07:05.248574  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.248983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.249015  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.248917  585935 retry.go:31] will retry after 639.776909ms: waiting for machine to come up
	I1008 19:07:05.890868  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.891332  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.891406  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.891329  585935 retry.go:31] will retry after 764.489176ms: waiting for machine to come up
	I1008 19:07:06.931497  584371 start.go:360] acquireMachinesLock for no-preload-966632: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:06.657130  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:06.657520  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:06.657550  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:06.657462  585935 retry.go:31] will retry after 1.348973281s: waiting for machine to come up
	I1008 19:07:08.008293  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:08.008779  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:08.008805  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:08.008740  585935 retry.go:31] will retry after 1.146283289s: waiting for machine to come up
	I1008 19:07:09.157106  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:09.157517  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:09.157546  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:09.157493  585935 retry.go:31] will retry after 1.510430686s: waiting for machine to come up
	I1008 19:07:10.669393  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:10.669802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:10.669831  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:10.669749  585935 retry.go:31] will retry after 2.380864418s: waiting for machine to come up
	I1008 19:07:13.053078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:13.053487  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:13.053512  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:13.053427  585935 retry.go:31] will retry after 2.553865951s: waiting for machine to come up
	I1008 19:07:15.610098  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:15.610501  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:15.610535  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:15.610428  585935 retry.go:31] will retry after 4.018444789s: waiting for machine to come up
	I1008 19:07:20.967039  585096 start.go:364] duration metric: took 3m30.476693248s to acquireMachinesLock for "default-k8s-diff-port-142496"
	I1008 19:07:20.967105  585096 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:20.967115  585096 fix.go:54] fixHost starting: 
	I1008 19:07:20.967619  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:20.967675  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:20.984936  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1008 19:07:20.985358  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:20.985869  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:07:20.985896  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:20.986199  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:20.986380  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:20.986520  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:07:20.987828  585096 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142496: state=Stopped err=<nil>
	I1008 19:07:20.987867  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	W1008 19:07:20.988020  585096 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:20.990029  585096 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142496" ...
	I1008 19:07:19.632076  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632468  585014 main.go:141] libmachine: (embed-certs-783146) Found IP for machine: 192.168.72.183
	I1008 19:07:19.632504  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has current primary IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632511  585014 main.go:141] libmachine: (embed-certs-783146) Reserving static IP address...
	I1008 19:07:19.632968  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.633020  585014 main.go:141] libmachine: (embed-certs-783146) DBG | skip adding static IP to network mk-embed-certs-783146 - found existing host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"}
	I1008 19:07:19.633041  585014 main.go:141] libmachine: (embed-certs-783146) Reserved static IP address: 192.168.72.183
	I1008 19:07:19.633062  585014 main.go:141] libmachine: (embed-certs-783146) Waiting for SSH to be available...
	I1008 19:07:19.633073  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Getting to WaitForSSH function...
	I1008 19:07:19.634939  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635221  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.635249  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635415  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH client type: external
	I1008 19:07:19.635453  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa (-rw-------)
	I1008 19:07:19.635496  585014 main.go:141] libmachine: (embed-certs-783146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:19.635509  585014 main.go:141] libmachine: (embed-certs-783146) DBG | About to run SSH command:
	I1008 19:07:19.635522  585014 main.go:141] libmachine: (embed-certs-783146) DBG | exit 0
	I1008 19:07:19.758276  585014 main.go:141] libmachine: (embed-certs-783146) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:19.758658  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 19:07:19.759310  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:19.761990  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.762456  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762803  585014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:07:19.763012  585014 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:19.763034  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:19.763271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.765523  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765829  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.765858  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765988  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.766159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766289  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766433  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.766589  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.766877  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.766891  585014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:19.866272  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:19.866297  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866563  585014 buildroot.go:166] provisioning hostname "embed-certs-783146"
	I1008 19:07:19.866585  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866799  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.869295  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869648  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.869679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869836  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.870017  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870153  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870293  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.870444  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.870621  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.870636  585014 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-783146 && echo "embed-certs-783146" | sudo tee /etc/hostname
	I1008 19:07:19.983892  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-783146
	
	I1008 19:07:19.983925  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.986430  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986776  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.986806  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986922  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.987104  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.987588  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.987746  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.987762  585014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-783146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-783146/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-783146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:20.095178  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:20.095212  585014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:20.095264  585014 buildroot.go:174] setting up certificates
	I1008 19:07:20.095276  585014 provision.go:84] configureAuth start
	I1008 19:07:20.095288  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:20.095578  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.098000  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098431  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.098459  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098591  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.100935  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101241  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.101271  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101393  585014 provision.go:143] copyHostCerts
	I1008 19:07:20.101452  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:20.101463  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:20.101544  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:20.101807  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:20.101824  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:20.101873  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:20.102015  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:20.102029  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:20.102075  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:20.102152  585014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-783146 san=[127.0.0.1 192.168.72.183 embed-certs-783146 localhost minikube]
	I1008 19:07:20.378020  585014 provision.go:177] copyRemoteCerts
	I1008 19:07:20.378093  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:20.378133  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.380678  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381017  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.381050  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381175  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.381386  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.381579  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.381717  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.464627  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:20.487853  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:07:20.510174  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:07:20.532381  585014 provision.go:87] duration metric: took 437.094502ms to configureAuth
	I1008 19:07:20.532405  585014 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:20.532571  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:20.532669  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.535064  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.535382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.535753  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.535920  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.536039  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.536193  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.536406  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.536429  585014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:20.745937  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:20.745967  585014 machine.go:96] duration metric: took 982.940955ms to provisionDockerMachine
	I1008 19:07:20.745980  585014 start.go:293] postStartSetup for "embed-certs-783146" (driver="kvm2")
	I1008 19:07:20.745994  585014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:20.746012  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.746380  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:20.746417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.749056  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749395  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.749425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749566  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.749738  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.749852  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.749943  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.828580  585014 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:20.832894  585014 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:20.832923  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:20.832994  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:20.833069  585014 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:20.833162  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:20.842230  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:20.864957  585014 start.go:296] duration metric: took 118.964089ms for postStartSetup
	I1008 19:07:20.865006  585014 fix.go:56] duration metric: took 18.93548189s for fixHost
	I1008 19:07:20.865029  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.867709  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868089  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.868113  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868223  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.868425  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868583  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868742  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.868926  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.869159  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.869175  585014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:20.966898  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414440.940275348
	
	I1008 19:07:20.966919  585014 fix.go:216] guest clock: 1728414440.940275348
	I1008 19:07:20.966926  585014 fix.go:229] Guest: 2024-10-08 19:07:20.940275348 +0000 UTC Remote: 2024-10-08 19:07:20.865011917 +0000 UTC m=+214.857488447 (delta=75.263431ms)
	I1008 19:07:20.966948  585014 fix.go:200] guest clock delta is within tolerance: 75.263431ms
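
The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the skew when it stays inside a tolerance (75.263431ms here). A minimal Go sketch of that comparison, using the two timestamps from the log and an assumed 1s tolerance (minikube's actual threshold is not shown in this output):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns the
// absolute offset from the supplied host timestamp. float64 parsing is only
// accurate to a few hundred nanoseconds at this magnitude, which is fine for
// a millisecond-scale tolerance check.
func guestClockDelta(dateOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(hostNow.Sub(guest)))), nil
}

func main() {
	// Both values come from the log above; the 1s tolerance is an assumption.
	hostNow := time.Date(2024, 10, 8, 19, 7, 20, 865011917, time.UTC)
	delta, err := guestClockDelta("1728414440.940275348", hostNow)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
}
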
	I1008 19:07:20.966953  585014 start.go:83] releasing machines lock for "embed-certs-783146", held for 19.037463535s
	I1008 19:07:20.966979  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.967246  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.969983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.970386  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970586  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971061  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971243  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971340  585014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:20.971382  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.971487  585014 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:20.971515  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.974211  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974581  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974632  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.974695  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974872  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.974999  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.975024  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.975028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975184  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975228  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.975374  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975501  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.975559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975709  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:21.072152  585014 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:21.078116  585014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:21.221176  585014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:21.227359  585014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:21.227434  585014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:21.242691  585014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:21.242716  585014 start.go:495] detecting cgroup driver to use...
	I1008 19:07:21.242796  585014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:21.257429  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:21.270208  585014 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:21.270258  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:21.282826  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:21.295827  585014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:21.405804  585014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:21.572147  585014 docker.go:233] disabling docker service ...
	I1008 19:07:21.572231  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:21.586083  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:21.598657  585014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:21.722224  585014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:21.853317  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:21.867234  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:21.884872  585014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:21.884949  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.895154  585014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:21.895223  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.905371  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.915602  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.926026  585014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:21.938089  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.949261  585014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.966211  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.978120  585014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:21.987631  585014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:21.987693  585014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:22.002185  585014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:22.013111  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:22.135933  585014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:22.230256  585014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:22.230342  585014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:22.235005  585014 start.go:563] Will wait 60s for crictl version
	I1008 19:07:22.235076  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:07:22.238991  585014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:22.279302  585014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:22.279391  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.308343  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.337272  585014 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:20.991759  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Start
	I1008 19:07:20.991997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring networks are active...
	I1008 19:07:20.992703  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network default is active
	I1008 19:07:20.993057  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network mk-default-k8s-diff-port-142496 is active
	I1008 19:07:20.993435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Getting domain xml...
	I1008 19:07:20.994209  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Creating domain...
	I1008 19:07:22.240185  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting to get IP...
	I1008 19:07:22.240949  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241417  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241469  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.241382  586083 retry.go:31] will retry after 234.248435ms: waiting for machine to come up
	I1008 19:07:22.476800  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477343  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477375  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.477275  586083 retry.go:31] will retry after 323.851452ms: waiting for machine to come up
	I1008 19:07:22.802997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803574  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803610  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.803516  586083 retry.go:31] will retry after 445.299956ms: waiting for machine to come up
	I1008 19:07:23.250211  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250686  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250715  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.250651  586083 retry.go:31] will retry after 574.786836ms: waiting for machine to come up
	I1008 19:07:23.827535  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828010  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.827959  586083 retry.go:31] will retry after 563.165045ms: waiting for machine to come up
	I1008 19:07:24.393150  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393741  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393792  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.393717  586083 retry.go:31] will retry after 576.443855ms: waiting for machine to come up
	I1008 19:07:24.971698  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972132  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972161  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.972090  586083 retry.go:31] will retry after 999.17904ms: waiting for machine to come up
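
The retry.go:31 lines above poll libvirt repeatedly for the new domain's DHCP lease, sleeping a little longer after each miss. A rough Go sketch of that wait-for-IP loop; lookupIP is a hypothetical stand-in for the real lease query, and the jittered, growing delays only approximate the intervals shown in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address of domain")

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases;
// it fails a few times and then "finds" a placeholder address.
func lookupIP(attempt int) (string, error) {
	if attempt < 6 {
		return "", errNoLease
	}
	return "192.0.2.10", nil // placeholder, not a real lease
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the delay, roughly matching the increasing
		// "will retry after ..." intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		delay += delay / 2
	}
}
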
	I1008 19:07:22.338812  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:22.341998  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:22.342417  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342680  585014 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:22.346863  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:22.359456  585014 kubeadm.go:883] updating cluster {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:22.359630  585014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:22.359692  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:22.394832  585014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:22.394893  585014 ssh_runner.go:195] Run: which lz4
	I1008 19:07:22.398935  585014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:22.403100  585014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:22.403127  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:23.771685  585014 crio.go:462] duration metric: took 1.372780034s to copy over tarball
	I1008 19:07:23.771769  585014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:25.816508  585014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044704362s)
	I1008 19:07:25.816547  585014 crio.go:469] duration metric: took 2.04482777s to extract the tarball
	I1008 19:07:25.816557  585014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:25.852980  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:25.893366  585014 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:25.893391  585014 cache_images.go:84] Images are preloaded, skipping loading
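
The crio.go:510 and crio.go:514 lines above decide whether the preload tarball is needed by listing the runtime's images and looking for the expected kube-apiserver tag. A hedged Go sketch of that check via crictl images --output json; the JSON field names are an assumption about crictl's output format, so verify against your crictl version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal shape of `crictl images --output json`; only the fields needed for
// the check are declared (assumed field names).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already has the given tag,
// mirroring the "couldn't find preloaded image ..." decision in the log.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
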
	I1008 19:07:25.893399  585014 kubeadm.go:934] updating node { 192.168.72.183 8443 v1.31.1 crio true true} ...
	I1008 19:07:25.893517  585014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-783146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:25.893579  585014 ssh_runner.go:195] Run: crio config
	I1008 19:07:25.934828  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:25.934850  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:25.934874  585014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:25.934906  585014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.183 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-783146 NodeName:embed-certs-783146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:25.935039  585014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-783146"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:25.935106  585014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:25.944851  585014 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:25.944919  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:25.954022  585014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1008 19:07:25.979675  585014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:26.001147  585014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1008 19:07:26.017613  585014 ssh_runner.go:195] Run: grep 192.168.72.183	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:26.021401  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:26.033347  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:25.972405  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972868  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972891  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:25.972831  586083 retry.go:31] will retry after 1.186801161s: waiting for machine to come up
	I1008 19:07:27.161319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161877  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161900  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:27.161823  586083 retry.go:31] will retry after 1.448383195s: waiting for machine to come up
	I1008 19:07:28.611319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611697  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:28.611613  586083 retry.go:31] will retry after 1.738948191s: waiting for machine to come up
	I1008 19:07:30.352081  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352582  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352617  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:30.352530  586083 retry.go:31] will retry after 2.624799898s: waiting for machine to come up
	I1008 19:07:26.138298  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:26.154419  585014 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146 for IP: 192.168.72.183
	I1008 19:07:26.154447  585014 certs.go:194] generating shared ca certs ...
	I1008 19:07:26.154470  585014 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:26.154651  585014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:26.154714  585014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:26.154729  585014 certs.go:256] generating profile certs ...
	I1008 19:07:26.154860  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/client.key
	I1008 19:07:26.154948  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key.b07aac04
	I1008 19:07:26.155003  585014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key
	I1008 19:07:26.155159  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:26.155202  585014 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:26.155212  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:26.155232  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:26.155256  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:26.155280  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:26.155319  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:26.156076  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:26.187225  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:26.235804  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:26.268034  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:26.292729  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 19:07:26.320118  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:26.351058  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:26.374004  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:07:26.396526  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:26.419067  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:26.441449  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:26.463768  585014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:26.479471  585014 ssh_runner.go:195] Run: openssl version
	I1008 19:07:26.484957  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:26.495286  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501166  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501225  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.507154  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:26.517587  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:26.528157  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532896  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532967  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.540724  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:26.554952  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:26.567160  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571304  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571394  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.576974  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
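
The openssl/ln commands above install each CA by name and then link it into /etc/ssl/certs under its subject hash as <hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0), the naming scheme OpenSSL uses to find trust anchors in a CA directory. A small illustrative Go sketch of producing such a symlink from openssl x509 -hash -noout output; the paths only approximate the log, and the sketch skips the existence check the log performs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the <hash>.0 symlink that OpenSSL's CA directory lookup expects.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// The log guards with `test -L ... || ln -fs ...`; os.Symlink simply
	// errors out if the link already exists.
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}
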
	I1008 19:07:26.587198  585014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:26.591621  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:26.597176  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:26.602766  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:26.608373  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:26.613797  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:26.619310  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:26.624702  585014 kubeadm.go:392] StartCluster: {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:26.624831  585014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:26.624878  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.666183  585014 cri.go:89] found id: ""
	I1008 19:07:26.666253  585014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:26.676621  585014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:26.676644  585014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:26.676699  585014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:26.686549  585014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:26.687532  585014 kubeconfig.go:125] found "embed-certs-783146" server: "https://192.168.72.183:8443"
	I1008 19:07:26.689545  585014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:26.698758  585014 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.183
	I1008 19:07:26.698790  585014 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:26.698804  585014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:26.698856  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.738148  585014 cri.go:89] found id: ""
	I1008 19:07:26.738209  585014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:26.753980  585014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:26.763186  585014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:26.763208  585014 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:26.763257  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:07:26.771789  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:26.771847  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:26.780812  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:07:26.789329  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:26.789390  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:26.798230  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.806781  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:26.806842  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.815549  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:07:26.823782  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:26.823830  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:26.832698  585014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:26.841687  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:26.945569  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.159232  585014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213619978s)
	I1008 19:07:28.159280  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.372727  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.456082  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.567486  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:28.567627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.067909  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.568466  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.068627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.567821  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.604366  585014 api_server.go:72] duration metric: took 2.036885191s to wait for apiserver process to appear ...
	I1008 19:07:30.604403  585014 api_server.go:88] waiting for apiserver healthz status ...
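
The healthz checks that follow poll https://192.168.72.183:8443/healthz and treat the 403 and 500 responses as "not ready yet"; the 403s appear while the anonymous-access RBAC rules are still being bootstrapped, and the 500s list which post-start hooks are still pending. A minimal Go sketch of such a poll loop, assuming the probe skips TLS verification because it does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires, mirroring the checks logged below.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not trusted by this probe, so verification is
		// skipped; minikube itself talks to the cluster with the profile's CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.183:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
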
	I1008 19:07:30.604440  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.461223  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.461270  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.461286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.499425  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.499473  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.604563  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.614594  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:33.614625  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.105286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.111706  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:34.111747  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.605326  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.612912  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:07:34.619204  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:34.619227  585014 api_server.go:131] duration metric: took 4.014816798s to wait for apiserver health ...
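The anonymous 403s and the [+]/[-] breakdowns above are successive probes of the apiserver's /healthz while its post-start hooks (RBAC bootstrap roles, priority classes, bootstrap controller) finish; the final 200 "ok" is the aggregate result. The same endpoint can be probed after the fact through kubectl's raw API access, assuming the embed-certs-783146 context from this run is available locally:

    # Aggregate health: returns the bare "ok" seen at the end of the wait
    kubectl --context embed-certs-783146 get --raw='/healthz'

    # Per-check breakdown, the same [+]/[-] lines the log prints on failure
    kubectl --context embed-certs-783146 get --raw='/healthz?verbose'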
	I1008 19:07:34.619236  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:34.619242  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:34.621043  585014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:32.980593  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981141  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981171  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:32.981076  586083 retry.go:31] will retry after 3.401015855s: waiting for machine to come up
	I1008 19:07:34.622500  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:34.632627  585014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
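The 496-byte 1-k8s.conflist copied here is the bridge CNI configuration announced by the out line above; its contents are not reproduced in the log. A quick way to inspect what actually landed on the node, using the profile name from this run (illustrative commands, not part of the harness):

    # Dump the conflist the kubelet will load from /etc/cni/net.d
    minikube -p embed-certs-783146 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist

    # Confirm the bridge device the plugin manages exists on the VM
    minikube -p embed-certs-783146 ssh -- ip link show type bridge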
	I1008 19:07:34.654975  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:34.667824  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:34.667853  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:34.667863  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:34.667874  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:34.667879  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:34.667884  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:34.667890  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:34.667899  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:34.667904  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:34.667910  585014 system_pods.go:74] duration metric: took 12.913884ms to wait for pod list to return data ...
	I1008 19:07:34.667919  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:34.672996  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:34.673018  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:34.673029  585014 node_conditions.go:105] duration metric: took 5.105827ms to run NodePressure ...
	I1008 19:07:34.673045  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:34.992309  585014 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996835  585014 kubeadm.go:739] kubelet initialised
	I1008 19:07:34.996861  585014 kubeadm.go:740] duration metric: took 4.524726ms waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996870  585014 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:35.005255  585014 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.012539  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012568  585014 pod_ready.go:82] duration metric: took 7.278613ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.012580  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012589  585014 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.018465  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018489  585014 pod_ready.go:82] duration metric: took 5.8848ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.018500  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018509  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.026503  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026533  585014 pod_ready.go:82] duration metric: took 8.012156ms for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.026544  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026555  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.058419  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058449  585014 pod_ready.go:82] duration metric: took 31.879605ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.058463  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058471  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.458244  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458275  585014 pod_ready.go:82] duration metric: took 399.794285ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.458286  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458292  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.858567  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858612  585014 pod_ready.go:82] duration metric: took 400.312425ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.858625  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858637  585014 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:36.258490  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258520  585014 pod_ready.go:82] duration metric: took 399.870797ms for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:36.258530  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258538  585014 pod_ready.go:39] duration metric: took 1.261659261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
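Every WaitExtra check above is skipped for the same reason: the node object still reports Ready=False immediately after the kubelet restart, so per-pod readiness is moot. A rough kubectl equivalent of what is being polled, using the node and context names from the log (not minikube's own code path):

    # The node condition the loop keeps tripping over
    kubectl --context embed-certs-783146 get node embed-certs-783146 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

    # The system-critical pods listed in the wait
    kubectl --context embed-certs-783146 -n kube-system get pods

    # Block until the node is Ready, mirroring the 6m node wait that follows
    kubectl --context embed-certs-783146 wait node/embed-certs-783146 \
      --for=condition=Ready --timeout=6m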
	I1008 19:07:36.258558  585014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:07:36.269993  585014 ops.go:34] apiserver oom_adj: -16
	I1008 19:07:36.270016  585014 kubeadm.go:597] duration metric: took 9.593365367s to restartPrimaryControlPlane
	I1008 19:07:36.270025  585014 kubeadm.go:394] duration metric: took 9.645330227s to StartCluster
	I1008 19:07:36.270044  585014 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.270125  585014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:07:36.271682  585014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.271945  585014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:07:36.272024  585014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:07:36.272130  585014 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-783146"
	I1008 19:07:36.272158  585014 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-783146"
	W1008 19:07:36.272166  585014 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:07:36.272152  585014 addons.go:69] Setting default-storageclass=true in profile "embed-certs-783146"
	I1008 19:07:36.272179  585014 addons.go:69] Setting metrics-server=true in profile "embed-certs-783146"
	I1008 19:07:36.272198  585014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-783146"
	I1008 19:07:36.272203  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272213  585014 addons.go:234] Setting addon metrics-server=true in "embed-certs-783146"
	W1008 19:07:36.272224  585014 addons.go:243] addon metrics-server should already be in state true
	I1008 19:07:36.272256  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272187  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:36.272616  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272638  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272658  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272689  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272694  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272738  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.274263  585014 out.go:177] * Verifying Kubernetes components...
	I1008 19:07:36.275444  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:36.288219  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1008 19:07:36.288686  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.289297  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.289328  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.289721  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.290415  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.290462  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.293043  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1008 19:07:36.293374  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I1008 19:07:36.293461  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293721  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293954  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.293978  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294188  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.294212  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294299  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294504  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.294534  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294982  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.295028  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.297638  585014 addons.go:234] Setting addon default-storageclass=true in "embed-certs-783146"
	W1008 19:07:36.297661  585014 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:07:36.297692  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.298042  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.298081  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.309286  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1008 19:07:36.309776  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310024  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1008 19:07:36.310337  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310360  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.310478  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310771  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.310980  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310997  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.311013  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.311330  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.311500  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.313004  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1008 19:07:36.313159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313368  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.313523  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313926  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.313951  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.314284  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.314777  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.314820  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.314992  585014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:07:36.315010  585014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:07:36.316168  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:07:36.316191  585014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:07:36.316212  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.316309  585014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.316333  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:07:36.316352  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.320088  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320418  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320566  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320591  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320733  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.320888  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320912  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320931  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321074  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.321181  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321235  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321400  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321397  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.321532  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.331532  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1008 19:07:36.331881  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.332309  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.332331  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.332724  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.332929  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.334589  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.334775  585014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.334797  585014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:07:36.334811  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.337675  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.338093  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338209  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.338380  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.338491  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.338600  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.444532  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:36.462719  585014 node_ready.go:35] waiting up to 6m0s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:36.519485  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.613714  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:07:36.613738  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:07:36.637773  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.645883  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:07:36.645907  585014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:07:36.685924  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.685952  585014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:07:36.710461  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.970231  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970256  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970563  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970589  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970599  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970606  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970860  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970881  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970892  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980520  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.980538  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.980826  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980869  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.980888  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.676577  585014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038767196s)
	I1008 19:07:37.676633  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.676646  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.676972  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.676982  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677040  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677058  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.677075  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.677333  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677351  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677375  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689600  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689615  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.689883  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.689897  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689901  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.689917  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689934  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.690210  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.690227  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.690240  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.690256  585014 addons.go:475] Verifying addon metrics-server=true in "embed-certs-783146"
	I1008 19:07:37.692035  585014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
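With default-storageclass, storage-provisioner and metrics-server applied, the object of interest for the later addon checks is the metrics-server Deployment in kube-system. A few hand-run checks that confirm the same state (illustrative; the commands are standard minikube/kubectl, the profile name comes from the log):

    # Addon status as minikube records it for this profile
    minikube -p embed-certs-783146 addons list

    # The objects the kubectl apply above created
    kubectl --context embed-certs-783146 -n kube-system get deploy,svc metrics-server

    # Only succeeds once metrics-server is actually serving the metrics API
    kubectl --context embed-certs-783146 top nodes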
	I1008 19:07:36.383659  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.383993  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.384026  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:36.383939  586083 retry.go:31] will retry after 3.325274435s: waiting for machine to come up
	I1008 19:07:39.713420  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.713902  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Found IP for machine: 192.168.50.213
	I1008 19:07:39.713926  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserving static IP address...
	I1008 19:07:39.713945  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has current primary IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.714332  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.714362  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserved static IP address: 192.168.50.213
	I1008 19:07:39.714382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | skip adding static IP to network mk-default-k8s-diff-port-142496 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"}
	I1008 19:07:39.714401  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Getting to WaitForSSH function...
	I1008 19:07:39.714415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for SSH to be available...
	I1008 19:07:39.716542  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.716905  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.716951  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.717025  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH client type: external
	I1008 19:07:39.717052  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa (-rw-------)
	I1008 19:07:39.717111  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:39.717147  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | About to run SSH command:
	I1008 19:07:39.717165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | exit 0
	I1008 19:07:39.842089  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:39.842499  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetConfigRaw
	I1008 19:07:39.843125  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:39.845604  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.845976  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.846008  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.846276  585096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:07:39.846509  585096 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:39.846541  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:39.846768  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.849107  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849411  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.849435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849743  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.849924  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850084  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850236  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.850422  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.850679  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.850695  585096 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:39.950481  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:39.950507  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.950796  585096 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142496"
	I1008 19:07:39.950825  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.951016  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.953300  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.953678  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953833  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.954002  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954168  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954297  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.954450  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.954621  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.954636  585096 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142496 && echo "default-k8s-diff-port-142496" | sudo tee /etc/hostname
	I1008 19:07:40.068848  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142496
	
	I1008 19:07:40.068876  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.071855  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072195  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.072226  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072392  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.072563  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072746  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072871  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.073039  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.073237  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.073257  585096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:40.183039  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:40.183073  585096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:40.183116  585096 buildroot.go:174] setting up certificates
	I1008 19:07:40.183131  585096 provision.go:84] configureAuth start
	I1008 19:07:40.183146  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:40.183451  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:40.185904  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186264  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.186284  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186453  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.188672  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.189037  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189134  585096 provision.go:143] copyHostCerts
	I1008 19:07:40.189204  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:40.189217  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:40.189281  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:40.189427  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:40.189441  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:40.189474  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:40.189563  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:40.189573  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:40.189600  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:40.189679  585096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142496 san=[127.0.0.1 192.168.50.213 default-k8s-diff-port-142496 localhost minikube]
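The server.pem generated here backs the machine's Docker TLS endpoint (the /etc/docker/server.pem remote path in the auth options above), and the san=[...] list in the log is what a connecting client must match. When provisioning fails with TLS errors, the generated certificate can be inspected directly; a small sketch using the path from the log:

    PEM=/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem
    # Subject, issuer and validity window
    openssl x509 -in "$PEM" -noout -subject -issuer -dates
    # SANs, which should match the san=[...] list logged by the provisioner
    openssl x509 -in "$PEM" -noout -text | grep -A1 'Subject Alternative Name'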
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:37.693230  585014 addons.go:510] duration metric: took 1.421218857s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1008 19:07:38.466754  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:40.967492  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:40.418969  585096 provision.go:177] copyRemoteCerts
	I1008 19:07:40.419032  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:40.419060  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.421382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421701  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.421730  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421912  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.422108  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.422287  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.422426  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.500533  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:40.524199  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 19:07:40.547495  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:07:40.570656  585096 provision.go:87] duration metric: took 387.509086ms to configureAuth
	I1008 19:07:40.570687  585096 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:40.570859  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:40.570934  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.573578  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.573941  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.573970  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.574088  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.574290  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574534  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574680  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.574881  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.575056  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.575074  585096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:40.795575  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:40.795604  585096 machine.go:96] duration metric: took 949.073836ms to provisionDockerMachine
	I1008 19:07:40.795618  585096 start.go:293] postStartSetup for "default-k8s-diff-port-142496" (driver="kvm2")
	I1008 19:07:40.795629  585096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:40.795646  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:40.796003  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:40.796042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.798307  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798635  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.798666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798881  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.799039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.799249  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.799369  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.880470  585096 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:40.884632  585096 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:40.884660  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:40.884719  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:40.884834  585096 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:40.884947  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:40.893828  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:40.917278  585096 start.go:296] duration metric: took 121.644332ms for postStartSetup
	I1008 19:07:40.917320  585096 fix.go:56] duration metric: took 19.950206082s for fixHost
	I1008 19:07:40.917342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.919971  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920315  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.920342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920539  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.920782  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.920969  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.921114  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.921292  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.921519  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.921535  585096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:41.022573  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414460.977520721
	
	I1008 19:07:41.022596  585096 fix.go:216] guest clock: 1728414460.977520721
	I1008 19:07:41.022603  585096 fix.go:229] Guest: 2024-10-08 19:07:40.977520721 +0000 UTC Remote: 2024-10-08 19:07:40.917324605 +0000 UTC m=+230.557951471 (delta=60.196116ms)
	I1008 19:07:41.022627  585096 fix.go:200] guest clock delta is within tolerance: 60.196116ms
	I1008 19:07:41.022634  585096 start.go:83] releasing machines lock for "default-k8s-diff-port-142496", held for 20.055558507s
	I1008 19:07:41.022665  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.022896  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:41.025861  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026272  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.026301  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026479  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027126  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027537  585096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:41.027581  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.027725  585096 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:41.027749  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.030474  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.030745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031094  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031123  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031148  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031322  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031511  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031572  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031827  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.031883  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.135368  585096 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:41.141492  585096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:41.288617  585096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:41.295482  585096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:41.295550  585096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:41.310709  585096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:41.310738  585096 start.go:495] detecting cgroup driver to use...
	I1008 19:07:41.310821  585096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:41.328574  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:41.342506  585096 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:41.342564  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:41.356308  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:41.372510  585096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:41.497084  585096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:41.665187  585096 docker.go:233] disabling docker service ...
	I1008 19:07:41.665272  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:41.682309  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:41.702567  585096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:41.882727  585096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:42.006479  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:42.020474  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:42.039750  585096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:42.039834  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.050395  585096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:42.050449  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.060572  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.071974  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.083208  585096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:42.097166  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.110090  585096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.128424  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.139296  585096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:42.148278  585096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:42.148320  585096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:42.164007  585096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:42.173218  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:42.303890  585096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:42.412074  585096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:42.412155  585096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:42.418606  585096 start.go:563] Will wait 60s for crictl version
	I1008 19:07:42.418662  585096 ssh_runner.go:195] Run: which crictl
	I1008 19:07:42.422670  585096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:42.469322  585096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:42.469432  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.501089  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.530412  585096 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:42.531554  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:42.534587  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.534928  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:42.534968  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.535235  585096 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:42.539279  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:42.552259  585096 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:42.552380  585096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:42.552447  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:42.588849  585096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:42.588928  585096 ssh_runner.go:195] Run: which lz4
	I1008 19:07:42.592785  585096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:42.597089  585096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:42.597119  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:44.003959  585096 crio.go:462] duration metric: took 1.411213503s to copy over tarball
	I1008 19:07:44.004075  585096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:43.467315  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:43.975147  585014 node_ready.go:49] node "embed-certs-783146" has status "Ready":"True"
	I1008 19:07:43.975180  585014 node_ready.go:38] duration metric: took 7.512429362s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:43.975194  585014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:43.982537  585014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999539  585014 pod_ready.go:93] pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:43.999566  585014 pod_ready.go:82] duration metric: took 16.995034ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999578  585014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506007  585014 pod_ready.go:93] pod "etcd-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:44.506032  585014 pod_ready.go:82] duration metric: took 506.447262ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506043  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
	I1008 19:07:46.159478  585096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155358377s)
	I1008 19:07:46.159512  585096 crio.go:469] duration metric: took 2.155494994s to extract the tarball
	I1008 19:07:46.159532  585096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:46.196073  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:46.239224  585096 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:46.239253  585096 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:46.239263  585096 kubeadm.go:934] updating node { 192.168.50.213 8444 v1.31.1 crio true true} ...
	I1008 19:07:46.239412  585096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:46.239482  585096 ssh_runner.go:195] Run: crio config
	I1008 19:07:46.284916  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:46.284941  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:46.284959  585096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:46.284980  585096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142496 NodeName:default-k8s-diff-port-142496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:46.285145  585096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142496"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:46.285218  585096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:46.295176  585096 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:46.295278  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:46.304340  585096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1008 19:07:46.320234  585096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:46.336215  585096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1008 19:07:46.352435  585096 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:46.355991  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:46.367424  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:46.491070  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:46.509165  585096 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496 for IP: 192.168.50.213
	I1008 19:07:46.509192  585096 certs.go:194] generating shared ca certs ...
	I1008 19:07:46.509213  585096 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:46.509413  585096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:46.509488  585096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:46.509507  585096 certs.go:256] generating profile certs ...
	I1008 19:07:46.509642  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/client.key
	I1008 19:07:46.509724  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key.8b79a92b
	I1008 19:07:46.509806  585096 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key
	I1008 19:07:46.510014  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:46.510069  585096 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:46.510082  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:46.510109  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:46.510154  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:46.510177  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:46.510220  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:46.510965  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:46.548979  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:46.588042  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:46.617201  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:46.645499  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 19:07:46.673075  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:46.705336  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:46.727739  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:07:46.755352  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:46.782421  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:46.804813  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:46.827321  585096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:46.843375  585096 ssh_runner.go:195] Run: openssl version
	I1008 19:07:46.848936  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:46.860851  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865320  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865379  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.871107  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:46.881518  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:46.891868  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.895991  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.896026  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.901219  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:46.914282  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:46.925095  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929407  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929465  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.934778  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:46.946807  585096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:46.951173  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:46.957072  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:46.962822  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:46.968584  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:46.974679  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:46.980081  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:46.985537  585096 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:46.985659  585096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:46.985706  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.025838  585096 cri.go:89] found id: ""
	I1008 19:07:47.025924  585096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:47.037778  585096 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:47.037800  585096 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:47.037847  585096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:47.049787  585096 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:47.050778  585096 kubeconfig.go:125] found "default-k8s-diff-port-142496" server: "https://192.168.50.213:8444"
	I1008 19:07:47.052921  585096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:47.062696  585096 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I1008 19:07:47.062747  585096 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:47.062775  585096 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:47.062822  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.101981  585096 cri.go:89] found id: ""
	I1008 19:07:47.102054  585096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:47.119421  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:47.129168  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:47.129189  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:47.129253  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:07:47.138071  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:47.138125  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:47.147202  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:07:47.155923  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:47.155979  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:47.164829  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.173366  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:47.173413  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.182417  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:07:47.191170  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:47.191228  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:47.200115  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:47.209146  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:47.314572  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.318198  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003546788s)
	I1008 19:07:48.318245  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.533505  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.617977  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.743670  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:48.743782  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.244765  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.744287  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.243920  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:46.513648  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:49.013409  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:50.422334  585014 pod_ready.go:93] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.422364  585014 pod_ready.go:82] duration metric: took 5.916314463s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.422379  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929739  585014 pod_ready.go:93] pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.929775  585014 pod_ready.go:82] duration metric: took 507.386631ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929790  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935612  585014 pod_ready.go:93] pod "kube-proxy-9l7t7" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.935638  585014 pod_ready.go:82] duration metric: took 5.84081ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935650  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941106  585014 pod_ready.go:93] pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.941131  585014 pod_ready.go:82] duration metric: took 5.47259ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941143  585014 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
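(Note, not part of the log: the pod_ready.go lines above poll each control-plane pod until its Ready condition is True. A minimal sketch of that kind of check using client-go is shown below; the function name, timings, and the way the client is obtained are illustrative assumptions, not minikube's actual pod_ready.go.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod until its PodReady condition is True,
    // logging the not-ready state much like the pod_ready.go lines above.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
                fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %q was not Ready within %v", name, timeout)
    }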
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
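(Note, not part of the log: the repeated "will retry after ..." lines above come from a jittered, growing backoff while waiting for the VM to obtain an IP. The sketch below shows that general pattern under assumed names and delays; it is not minikube's actual retry.go.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a randomized,
    // growing delay between attempts, and gives up after maxWait.
    func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // Jitter keeps parallel waiters from polling in lockstep; the base delay grows.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }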
	I1008 19:07:50.744524  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.830124  585096 api_server.go:72] duration metric: took 2.086461343s to wait for apiserver process to appear ...
	I1008 19:07:50.830161  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:50.830192  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:50.830915  585096 api_server.go:269] stopped: https://192.168.50.213:8444/healthz: Get "https://192.168.50.213:8444/healthz": dial tcp 192.168.50.213:8444: connect: connection refused
	I1008 19:07:51.331031  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.027442  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.027468  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.027483  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.101043  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.101073  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.330385  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.335009  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.335035  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:54.830407  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.835912  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.835939  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:55.330454  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:55.336271  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:07:55.343556  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:55.343586  585096 api_server.go:131] duration metric: took 4.513416619s to wait for apiserver health ...
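(Note, not part of the log: the 403 -> 500 -> 200 progression above is typical while the apiserver finishes its post-start hooks, e.g. rbac/bootstrap-roles. A hedged sketch of such a /healthz poll is shown below; the endpoint handling, timings, and TLS handling are assumptions for illustration, not minikube's actual api_server.go.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200, tolerating the 403 (anonymous user)
    // and 500 (post-start hooks still running) responses seen during apiserver startup.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed cert during bootstrap, so this
            // kubeconfig-less probe skips verification (assumption for the sketch).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned "ok"
                }
                fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
    }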
	I1008 19:07:55.343604  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:55.343612  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:55.345259  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:55.346612  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:55.357899  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:55.383903  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:52.948407  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:55.449059  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:07:55.397903  585096 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:55.397935  585096 system_pods.go:61] "coredns-7c65d6cfc9-tkg8j" [0b436a1f-2b8e-4a5f-8063-695480275f2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:55.397944  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [cc702ae5-7e74-4a18-942e-1d236d39c43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:55.397952  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [da72d2f3-aab5-42c3-9733-7c0ce470e61e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:55.397959  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [de964717-b4de-4c7c-a9b5-164e7a048d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:55.397966  585096 system_pods.go:61] "kube-proxy-lwggr" [d5d96599-c3d3-4eba-a2ad-0c027e8ef1ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:55.397971  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [9218d69d-97ca-4680-856b-95c43fa371ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:55.397976  585096 system_pods.go:61] "metrics-server-6867b74b74-pfc2c" [9bafd6da-a33e-4182-a0d7-5e4c9473f057] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:55.397982  585096 system_pods.go:61] "storage-provisioner" [b60980ab-2552-404e-b351-4b163a075732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:55.397988  585096 system_pods.go:74] duration metric: took 14.056648ms to wait for pod list to return data ...
	I1008 19:07:55.397997  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:55.403870  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:55.403906  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:55.403920  585096 node_conditions.go:105] duration metric: took 5.917994ms to run NodePressure ...
	I1008 19:07:55.403941  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:55.677555  585096 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682514  585096 kubeadm.go:739] kubelet initialised
	I1008 19:07:55.682539  585096 kubeadm.go:740] duration metric: took 4.953783ms waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682550  585096 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:55.688641  585096 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:57.695361  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.195582  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:57.948167  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.446946  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
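(Note, not part of the log: the provision step above generates a server certificate signed by the machine CA whose SANs cover the IPs and host names listed. The Go sketch below shows one way such a SAN-bearing server cert can be produced with crypto/x509; key size, validity, and subject are illustrative assumptions, not minikube's provisioner.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert creates a server certificate for the SANs from the log line above,
    // signed by an existing CA certificate and key.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-256554"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.90")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-256554"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }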
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.834691  584371 start.go:364] duration metric: took 54.903126319s to acquireMachinesLock for "no-preload-966632"
	I1008 19:08:01.834745  584371 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:08:01.834753  584371 fix.go:54] fixHost starting: 
	I1008 19:08:01.835158  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:01.835200  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:01.854850  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1008 19:08:01.855220  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:01.855740  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:01.855770  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:01.856201  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:01.856428  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:01.856587  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:01.857921  584371 fix.go:112] recreateIfNeeded on no-preload-966632: state=Stopped err=<nil>
	I1008 19:08:01.857943  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	W1008 19:08:01.858110  584371 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:08:01.859994  584371 out.go:177] * Restarting existing kvm2 VM for "no-preload-966632" ...
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
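(Note, not part of the log: the clock check above runs `date +%s.%N` on the guest and compares it with the host clock, accepting a small skew. The sketch below parses that output and computes the delta; the function name and tolerance handling are assumptions, not minikube's fix.go.)

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // checkGuestClock takes the raw output of `date +%s.%N` and the host time captured
    // around the same moment, and reports whether the skew is within tolerance.
    func checkGuestClock(dateOutput string, hostTime time.Time, tolerance time.Duration) error {
        secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
        if err != nil {
            return fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := hostTime.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        if delta > tolerance {
            return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
        }
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        return nil
    }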
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
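Note: the sequence above (sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by daemon-reload and a crio restart) is how the pause image and cgroup driver get switched for this profile. A minimal Go sketch of the same edit, run locally instead of over ssh_runner and not taken from minikube source, with regexes mirroring the sed expressions in the log:

// illustrative sketch: rewrite pause_image and cgroup_manager in CRI-O's
// drop-in config, mirroring the sed commands in the log, then restart crio.
package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)
	// pause_image = "registry.k8s.io/pause:3.2"
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	// drop any existing conmon_cgroup line, then set cgroupfs + conmon in "pod"
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		log.Fatal(err)
	}
	// pick up the new config, as the log does with systemctl restart crio
	if out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}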
	I1008 19:08:01.861355  584371 main.go:141] libmachine: (no-preload-966632) Calling .Start
	I1008 19:08:01.861539  584371 main.go:141] libmachine: (no-preload-966632) Ensuring networks are active...
	I1008 19:08:01.862455  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network default is active
	I1008 19:08:01.862878  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network mk-no-preload-966632 is active
	I1008 19:08:01.863368  584371 main.go:141] libmachine: (no-preload-966632) Getting domain xml...
	I1008 19:08:01.864106  584371 main.go:141] libmachine: (no-preload-966632) Creating domain...
	I1008 19:08:03.179854  584371 main.go:141] libmachine: (no-preload-966632) Waiting to get IP...
	I1008 19:08:03.180838  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.181232  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.181301  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.181206  586496 retry.go:31] will retry after 229.567854ms: waiting for machine to come up
	I1008 19:08:03.412710  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.413201  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.413225  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.413170  586496 retry.go:31] will retry after 361.675143ms: waiting for machine to come up
	I1008 19:08:03.776466  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.777140  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.777184  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.777047  586496 retry.go:31] will retry after 323.194852ms: waiting for machine to come up
	I1008 19:08:04.101865  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.102357  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.102388  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.102310  586496 retry.go:31] will retry after 484.995282ms: waiting for machine to come up
	I1008 19:08:02.698935  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:05.195930  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:02.447582  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:04.450889  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
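For reference, the preload step above boils down to: check whether /preloaded.tar.lz4 already exists on the node, copy it over if not, then unpack it into /var with lz4 so crictl can see the images. A small local sketch of that logic (paths and the tar flags taken from the log, run without ssh_runner):

// illustrative sketch of the preload check + extraction shown in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// existence check failed; the log copies the cached tarball over with scp first
		log.Printf("existence check for %s: %v; would copy the cached tarball over now", tarball, err)
		return
	}
	// same extraction command the log runs over SSH
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	log.Println("preloaded images extracted; crictl images should now list them")
}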
	I1008 19:08:04.589063  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.589752  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.589780  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.589706  586496 retry.go:31] will retry after 543.703113ms: waiting for machine to come up
	I1008 19:08:05.135522  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.135997  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.136023  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.135944  586496 retry.go:31] will retry after 617.479763ms: waiting for machine to come up
	I1008 19:08:05.754978  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.755541  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.755568  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.755486  586496 retry.go:31] will retry after 849.017716ms: waiting for machine to come up
	I1008 19:08:06.606621  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:06.607072  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:06.607105  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:06.607023  586496 retry.go:31] will retry after 1.133489837s: waiting for machine to come up
	I1008 19:08:07.742713  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:07.743299  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:07.743329  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:07.743252  586496 retry.go:31] will retry after 1.797316795s: waiting for machine to come up
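The "will retry after ..." lines above come from a jittered, growing backoff that keeps polling until libvirt hands the domain a DHCP lease. A rough stand-alone sketch of such a loop, where lookupIP is a hypothetical stand-in for the lease query and none of this is minikube's actual retry.go:

// minimal retry-with-backoff sketch for "waiting for machine to come up".
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(domain string) (string, error) {
	// placeholder: in the real flow this asks libvirt for the domain's DHCP lease
	return "", errors.New("unable to find current IP address")
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay on each attempt
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("no-preload-966632", 2*time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}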
	I1008 19:08:07.196317  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.698409  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.698443  585096 pod_ready.go:82] duration metric: took 12.009772792s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.698475  585096 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.708991  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.709015  585096 pod_ready.go:82] duration metric: took 10.527401ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.709028  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714343  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.714369  585096 pod_ready.go:82] duration metric: took 5.331417ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714383  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.118973  585096 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:06.948829  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:09.448376  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
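The openssl -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). An equivalent check in Go, offered only as an illustration with the file list taken from the log:

// illustrative Go version of "openssl x509 -checkend 86400" for the node certs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true when the certificate's NotAfter falls inside the checkend window
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresSoon(c, 24*time.Hour) // 86400 seconds
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}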
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
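Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A compact sketch of that sequence, using the binary path and config file shown in the log rather than minikube's own code:

// illustrative loop over the "kubeadm init phase" steps from the log above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v: %v\n%s", args, err, out)
		}
	}
	log.Println("control-plane static pods regenerated; waiting for the apiserver next")
}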
	I1008 19:08:09.541879  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:09.542340  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:09.542372  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:09.542288  586496 retry.go:31] will retry after 2.238590286s: waiting for machine to come up
	I1008 19:08:11.783440  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:11.783909  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:11.783945  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:11.783858  586496 retry.go:31] will retry after 2.226110801s: waiting for machine to come up
	I1008 19:08:14.012103  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:14.012538  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:14.012561  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:14.012493  586496 retry.go:31] will retry after 2.298206633s: waiting for machine to come up
	I1008 19:08:10.849833  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.849856  585096 pod_ready.go:82] duration metric: took 3.13546554s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.849868  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858341  585096 pod_ready.go:93] pod "kube-proxy-lwggr" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.858367  585096 pod_ready.go:82] duration metric: took 8.492572ms for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858379  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865890  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.865909  585096 pod_ready.go:82] duration metric: took 7.521945ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865918  585096 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:12.873861  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:15.372408  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.450482  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:13.948331  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
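The pod_ready.go lines above report each pod's Ready condition; a status of "Ready":"False" means the condition has not turned True yet, and the harness keeps polling until its 4m0s budget runs out. A small client-go sketch of the same check (the kubeconfig path is a placeholder; the pod name is one of those polled in the log):

// illustrative client-go check of a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-4d48d", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %q Ready: %v\n", pod.Name, ready)
}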
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.313868  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:16.314460  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:16.314484  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:16.314424  586496 retry.go:31] will retry after 3.672085858s: waiting for machine to come up
	I1008 19:08:17.872689  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.372637  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.448090  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:18.947580  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.948804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
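The repeated pgrep runs above poll for the kube-apiserver process roughly every 500ms (the timestamps land on .206/.706) until it appears. A minimal stand-alone version of that wait loop; the interval and pattern come from the log, while the two-minute overall timeout is an assumption:

// illustrative poll loop waiting for the apiserver process to appear.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver process appeared, pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the apiserver process")
}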
	I1008 19:08:19.989014  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989556  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has current primary IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989576  584371 main.go:141] libmachine: (no-preload-966632) Found IP for machine: 192.168.61.141
	I1008 19:08:19.989589  584371 main.go:141] libmachine: (no-preload-966632) Reserving static IP address...
	I1008 19:08:19.990000  584371 main.go:141] libmachine: (no-preload-966632) Reserved static IP address: 192.168.61.141
	I1008 19:08:19.990036  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.990048  584371 main.go:141] libmachine: (no-preload-966632) Waiting for SSH to be available...
	I1008 19:08:19.990068  584371 main.go:141] libmachine: (no-preload-966632) DBG | skip adding static IP to network mk-no-preload-966632 - found existing host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"}
	I1008 19:08:19.990076  584371 main.go:141] libmachine: (no-preload-966632) DBG | Getting to WaitForSSH function...
	I1008 19:08:19.992644  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.992970  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.993010  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.993081  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH client type: external
	I1008 19:08:19.993104  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa (-rw-------)
	I1008 19:08:19.993136  584371 main.go:141] libmachine: (no-preload-966632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:19.993152  584371 main.go:141] libmachine: (no-preload-966632) DBG | About to run SSH command:
	I1008 19:08:19.993174  584371 main.go:141] libmachine: (no-preload-966632) DBG | exit 0
	I1008 19:08:20.118205  584371 main.go:141] libmachine: (no-preload-966632) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:20.118616  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetConfigRaw
	I1008 19:08:20.119326  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.122203  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122678  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.122708  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122926  584371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 19:08:20.123144  584371 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:20.123164  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:20.123360  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.125759  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126083  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.126108  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126265  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.126442  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.126980  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.127189  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.127201  584371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:20.234458  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:20.234491  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.234781  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:08:20.234811  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.235044  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.237673  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.237993  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.238016  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.238221  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.238418  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238612  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238806  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.238981  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.239176  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.239203  584371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-966632 && echo "no-preload-966632" | sudo tee /etc/hostname
	I1008 19:08:20.360621  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-966632
	
	I1008 19:08:20.360649  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.363600  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.363909  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.363947  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.364166  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.364297  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364426  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364510  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.364630  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.364855  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.364881  584371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-966632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-966632/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-966632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:20.483101  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:20.483131  584371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:20.483149  584371 buildroot.go:174] setting up certificates
	I1008 19:08:20.483161  584371 provision.go:84] configureAuth start
	I1008 19:08:20.483171  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.483429  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.486467  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.486838  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.486871  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.487037  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.489207  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489531  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.489557  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489655  584371 provision.go:143] copyHostCerts
	I1008 19:08:20.489726  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:20.489737  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:20.489803  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:20.489927  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:20.489939  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:20.489987  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:20.490072  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:20.490083  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:20.490110  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:20.490231  584371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.no-preload-966632 san=[127.0.0.1 192.168.61.141 localhost minikube no-preload-966632]
	I1008 19:08:20.618050  584371 provision.go:177] copyRemoteCerts
	I1008 19:08:20.618117  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:20.618149  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.621118  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621458  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.621485  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621670  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.621875  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.622056  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.622224  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:20.704439  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:20.730441  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:08:20.755072  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:08:20.777513  584371 provision.go:87] duration metric: took 294.340685ms to configureAuth
	I1008 19:08:20.777550  584371 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:20.777774  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:08:20.777873  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.780540  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.780956  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.780995  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.781185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.781423  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781615  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.781989  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.782179  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.782203  584371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:21.003896  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:21.003925  584371 machine.go:96] duration metric: took 880.766243ms to provisionDockerMachine
	I1008 19:08:21.003940  584371 start.go:293] postStartSetup for "no-preload-966632" (driver="kvm2")
	I1008 19:08:21.003955  584371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:21.003974  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.004286  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:21.004312  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.007138  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007472  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.007500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007610  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.007820  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.007991  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.008163  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.093075  584371 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:21.097048  584371 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:21.097076  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:21.097160  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:21.097254  584371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:21.097370  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:21.106698  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:21.130484  584371 start.go:296] duration metric: took 126.530716ms for postStartSetup
	I1008 19:08:21.130526  584371 fix.go:56] duration metric: took 19.295774496s for fixHost
	I1008 19:08:21.130550  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.133361  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.133717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.133744  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.134048  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.134269  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134525  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134710  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.134888  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:21.135119  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:21.135135  584371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:21.242740  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414501.194174379
	
	I1008 19:08:21.242765  584371 fix.go:216] guest clock: 1728414501.194174379
	I1008 19:08:21.242776  584371 fix.go:229] Guest: 2024-10-08 19:08:21.194174379 +0000 UTC Remote: 2024-10-08 19:08:21.130530022 +0000 UTC m=+356.786912807 (delta=63.644357ms)
	I1008 19:08:21.242823  584371 fix.go:200] guest clock delta is within tolerance: 63.644357ms
	I1008 19:08:21.242835  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 19.408108613s
	I1008 19:08:21.242857  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.243112  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:21.245967  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246378  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.246409  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246731  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247314  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247500  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247588  584371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:21.247640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.247706  584371 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:21.247731  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.250191  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250228  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250665  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250694  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250729  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250789  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250948  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250962  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251129  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251314  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.251334  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251462  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.353600  584371 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:21.360031  584371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:21.502001  584371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:21.508846  584371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:21.508938  584371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:21.524597  584371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:21.524626  584371 start.go:495] detecting cgroup driver to use...
	I1008 19:08:21.524699  584371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:21.541500  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:21.553886  584371 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:21.553943  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:21.567027  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:21.579965  584371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:21.692823  584371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:21.844393  584371 docker.go:233] disabling docker service ...
	I1008 19:08:21.844461  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:21.860471  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:21.873229  584371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:22.003106  584371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:22.129301  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:22.143314  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:22.161423  584371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:08:22.161494  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.171355  584371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:22.171429  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.180962  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.190212  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.199737  584371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:22.209488  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.219051  584371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.235430  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.245007  584371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:22.253705  584371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:22.253748  584371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:22.265343  584371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:22.275245  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:22.380960  584371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:22.471004  584371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:22.471067  584371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:22.475520  584371 start.go:563] Will wait 60s for crictl version
	I1008 19:08:22.475598  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.479271  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:22.523709  584371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:22.523787  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.551307  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.579271  584371 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:08:22.580608  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:22.583417  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583783  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:22.583825  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583991  584371 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:22.587937  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:22.600324  584371 kubeadm.go:883] updating cluster {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:22.600465  584371 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:08:22.600506  584371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:22.641111  584371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:08:22.641139  584371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:22.641194  584371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.641224  584371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.641284  584371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.641307  584371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.641377  584371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.641407  584371 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 19:08:22.641742  584371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642057  584371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.642568  584371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.642576  584371 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 19:08:22.642669  584371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.642876  584371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.642894  584371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.643310  584371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.799972  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.811504  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.815340  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.815659  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.817303  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.858380  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.864688  584371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1008 19:08:22.864727  584371 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.864762  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.877332  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1008 19:08:22.934971  584371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1008 19:08:22.935035  584371 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.935085  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945549  584371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1008 19:08:22.945594  584371 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.945644  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945645  584371 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1008 19:08:22.945683  584371 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.945685  584371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1008 19:08:22.945730  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945733  584371 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.945796  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981887  584371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1008 19:08:22.982012  584371 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.982059  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981954  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.082208  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.082210  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.082304  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.082411  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.082430  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.082543  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.178344  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.196633  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.196665  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.196733  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.209763  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.209830  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.310142  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.317659  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.317731  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.327221  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.331490  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.346298  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 19:08:23.346412  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.435656  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 19:08:23.435679  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1008 19:08:23.435783  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:23.435788  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:23.441591  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 19:08:23.441673  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:23.441696  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 19:08:23.441782  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 19:08:23.441814  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:23.441856  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:23.441901  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1008 19:08:23.441918  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.441947  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.445597  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1008 19:08:23.445630  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1008 19:08:23.449022  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1008 19:08:23.450009  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.373452  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:24.872600  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:23.448074  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:25.449287  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.950280  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.508431356s)
	I1008 19:08:25.950340  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.508402491s)
	I1008 19:08:25.950344  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1008 19:08:25.950357  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1008 19:08:25.950545  584371 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.50050623s)
	I1008 19:08:25.950600  584371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1008 19:08:25.950611  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.508516442s)
	I1008 19:08:25.950637  584371 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:25.950648  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1008 19:08:25.950680  584371 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:25.950688  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:25.950727  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:29.225357  584371 ssh_runner.go:235] Completed: which crictl: (3.274648192s)
	I1008 19:08:29.225514  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:29.225532  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.27477814s)
	I1008 19:08:29.225561  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1008 19:08:29.225593  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:29.225627  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:27.373617  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.374173  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:27.948313  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.948750  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.696201  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.470655089s)
	I1008 19:08:30.696255  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.470604601s)
	I1008 19:08:30.696284  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1008 19:08:30.696296  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:30.696317  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.696365  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.740520  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:32.685896  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.989500601s)
	I1008 19:08:32.685941  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1008 19:08:32.685971  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.685971  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.945412846s)
	I1008 19:08:32.686046  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.686045  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 19:08:32.686186  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:31.872718  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:33.873665  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:32.447765  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:34.948257  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.663874  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.977781248s)
	I1008 19:08:34.663914  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1008 19:08:34.663939  584371 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:34.663942  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977724244s)
	I1008 19:08:34.663973  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1008 19:08:34.663991  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:36.833283  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.169263327s)
	I1008 19:08:36.833320  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1008 19:08:36.833353  584371 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:36.833417  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:37.485901  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 19:08:37.485954  584371 cache_images.go:123] Successfully loaded all cached images
	I1008 19:08:37.485961  584371 cache_images.go:92] duration metric: took 14.844810749s to LoadCachedImages
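The entries above show minikube loading its cached image tarballs into CRI-O by shelling out to "sudo podman load -i <tarball>" and timing the whole batch. As an illustration only (not minikube's cache_images implementation), here is a minimal Go sketch of that shell-out-and-time pattern; the tarball path in main is taken from the log but is otherwise a hypothetical example, and podman is assumed to be on the PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImageTarball shells out to podman to load a cached image tarball,
// mirroring the "sudo podman load -i ..." commands logged above, and
// returns how long the load took.
func loadImageTarball(path string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "podman", "load", "-i", path)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("podman load %s: %v\n%s", path, err, out)
	}
	return time.Since(start), nil
}

func main() {
	// Hypothetical tarball path; the report loads images from /var/lib/minikube/images.
	d, err := loadImageTarball("/var/lib/minikube/images/kube-apiserver_v1.31.1")
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("loaded in %s\n", d)
}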
	I1008 19:08:37.485973  584371 kubeadm.go:934] updating node { 192.168.61.141 8443 v1.31.1 crio true true} ...
	I1008 19:08:37.486084  584371 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-966632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:37.486149  584371 ssh_runner.go:195] Run: crio config
	I1008 19:08:37.544511  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:37.544535  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:37.544554  584371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:37.544576  584371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-966632 NodeName:no-preload-966632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:08:37.544718  584371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-966632"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:37.544792  584371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:08:37.556979  584371 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:37.557049  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:37.566249  584371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 19:08:37.583303  584371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:37.599535  584371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1008 19:08:37.616315  584371 ssh_runner.go:195] Run: grep 192.168.61.141	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:37.620089  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:37.632181  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:37.748647  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
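The bash one-liner at 19:08:37.620089 rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP before kubelet is restarted. A small Go sketch of the same idempotent update follows; it is an illustration only, written against a scratch copy of a hosts-style file rather than /etc/hosts itself, and the path in main is hypothetical:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-style file so that exactly one line maps
// hostname to ip, which is what the grep/echo/cp one-liner in the log does.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical scratch copy; the log edits the real /etc/hosts via sudo cp.
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.61.141", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}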
	I1008 19:08:37.765577  584371 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632 for IP: 192.168.61.141
	I1008 19:08:37.765600  584371 certs.go:194] generating shared ca certs ...
	I1008 19:08:37.765619  584371 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:37.765829  584371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:37.765890  584371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:37.765904  584371 certs.go:256] generating profile certs ...
	I1008 19:08:37.766020  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.key
	I1008 19:08:37.766095  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key.a515ed11
	I1008 19:08:37.766143  584371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key
	I1008 19:08:37.766334  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:37.766383  584371 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:37.766398  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:37.766430  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:37.766467  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:37.766501  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:37.766562  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:37.767588  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:37.804400  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:37.837466  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:37.865516  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:37.894827  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:08:37.918668  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:08:37.948238  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:37.974152  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:08:37.997284  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:38.019295  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:38.043392  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:38.067971  584371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:38.084940  584371 ssh_runner.go:195] Run: openssl version
	I1008 19:08:38.090779  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:38.102715  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107292  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107355  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.113456  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:38.123904  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:38.134337  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138503  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138561  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.143902  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:38.155393  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:38.167107  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171433  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171480  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.176968  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:38.188437  584371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:38.192733  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:38.198531  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:38.204187  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:38.210522  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:38.216328  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:38.222077  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
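The openssl invocations above use "-checkend 86400" to ask whether each control-plane certificate expires within the next 24 hours. A minimal Go equivalent of that check using crypto/x509 is sketched below; it is an illustration, not the certs.go code, and the certificate path in main is simply the first one probed in the log (any PEM certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what "openssl x509 -noout -in <cert> -checkend 86400" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}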
	I1008 19:08:38.227724  584371 kubeadm.go:392] StartCluster: {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:38.227802  584371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:38.227882  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.262461  584371 cri.go:89] found id: ""
	I1008 19:08:38.262532  584371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:38.272591  584371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:38.272612  584371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:38.272677  584371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:38.282621  584371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:38.283683  584371 kubeconfig.go:125] found "no-preload-966632" server: "https://192.168.61.141:8443"
	I1008 19:08:38.286019  584371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:38.295315  584371 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.141
	I1008 19:08:38.295344  584371 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:38.295357  584371 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:38.295400  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.329462  584371 cri.go:89] found id: ""
	I1008 19:08:38.329533  584371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:38.345901  584371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:38.354899  584371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:38.354920  584371 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:38.354965  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:38.363242  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:38.363282  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:38.373063  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:38.381479  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:38.381530  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:38.390679  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.400033  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:38.400071  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.409308  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:38.417842  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:38.417876  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:38.427251  584371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:38.437010  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:38.562381  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.344247  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:36.372911  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:38.872768  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:37.448043  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:39.956579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.550458  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.619345  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.718016  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:39.718126  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.218974  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.719108  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.741178  584371 api_server.go:72] duration metric: took 1.023163924s to wait for apiserver process to appear ...
	I1008 19:08:40.741210  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:08:40.741235  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:40.741767  584371 api_server.go:269] stopped: https://192.168.61.141:8443/healthz: Get "https://192.168.61.141:8443/healthz": dial tcp 192.168.61.141:8443: connect: connection refused
	I1008 19:08:41.241356  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.787235  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:08:43.787284  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:08:43.787306  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.914606  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:43.914653  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:44.242033  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.247068  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.247097  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:40.873394  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:43.373475  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:42.446900  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:44.447141  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.742212  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.756340  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.756371  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.241997  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.246343  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.246367  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.741898  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.749274  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.749301  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.241889  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.246127  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.246155  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.741694  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.746192  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.746219  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:47.242250  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:47.246571  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:08:47.252812  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:08:47.252843  584371 api_server.go:131] duration metric: took 6.511626175s to wait for apiserver health ...
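	The probe loop above polls https://192.168.61.141:8443/healthz until it returns 200, logging along the way the 403 body (the anonymous probe is rejected while the bootstrap RBAC roles are still being created) and the 500 bodies (several poststart hooks not yet finished). Below is a minimal sketch of such a poll, assuming a plain net/http client that skips verification of the apiserver's self-signed serving certificate; it is an illustration of the pattern, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, mirroring the probe loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against the apiserver's self-signed serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.141:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}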
	I1008 19:08:47.252852  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:47.252858  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:47.254723  584371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:08:47.255933  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:08:47.266073  584371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:08:47.284042  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:08:47.293401  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:08:47.293432  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:08:47.293439  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:08:47.293450  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:08:47.293456  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:08:47.293464  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:08:47.293469  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:08:47.293474  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:08:47.293478  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:08:47.293484  584371 system_pods.go:74] duration metric: took 9.422158ms to wait for pod list to return data ...
	I1008 19:08:47.293493  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:08:47.296923  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:08:47.296947  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:08:47.296960  584371 node_conditions.go:105] duration metric: took 3.462212ms to run NodePressure ...
	I1008 19:08:47.296979  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:47.562271  584371 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566914  584371 kubeadm.go:739] kubelet initialised
	I1008 19:08:47.566938  584371 kubeadm.go:740] duration metric: took 4.63692ms waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566950  584371 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:47.571271  584371 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.575633  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575659  584371 pod_ready.go:82] duration metric: took 4.364181ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.575671  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575680  584371 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.579443  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579465  584371 pod_ready.go:82] duration metric: took 3.775248ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.579475  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579483  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.583747  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583775  584371 pod_ready.go:82] duration metric: took 4.277306ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.583785  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583797  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.687618  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687652  584371 pod_ready.go:82] duration metric: took 103.843425ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.687663  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687669  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.087568  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087601  584371 pod_ready.go:82] duration metric: took 399.92202ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.087613  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087622  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.487223  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487256  584371 pod_ready.go:82] duration metric: took 399.625038ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.487269  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487278  584371 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.887764  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887798  584371 pod_ready.go:82] duration metric: took 400.504473ms for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.887812  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887821  584371 pod_ready.go:39] duration metric: took 1.320859293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:48.887842  584371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:08:48.901255  584371 ops.go:34] apiserver oom_adj: -16
	I1008 19:08:48.901279  584371 kubeadm.go:597] duration metric: took 10.628659432s to restartPrimaryControlPlane
	I1008 19:08:48.901290  584371 kubeadm.go:394] duration metric: took 10.673572592s to StartCluster
	I1008 19:08:48.901313  584371 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.901397  584371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:48.904024  584371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.904361  584371 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:08:48.904455  584371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:08:48.904549  584371 addons.go:69] Setting storage-provisioner=true in profile "no-preload-966632"
	I1008 19:08:48.904565  584371 addons.go:69] Setting default-storageclass=true in profile "no-preload-966632"
	I1008 19:08:48.904594  584371 addons.go:234] Setting addon storage-provisioner=true in "no-preload-966632"
	W1008 19:08:48.904603  584371 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:08:48.904603  584371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-966632"
	I1008 19:08:48.904574  584371 addons.go:69] Setting metrics-server=true in profile "no-preload-966632"
	I1008 19:08:48.904646  584371 addons.go:234] Setting addon metrics-server=true in "no-preload-966632"
	I1008 19:08:48.904651  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.904652  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1008 19:08:48.904670  584371 addons.go:243] addon metrics-server should already be in state true
	I1008 19:08:48.904705  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.905079  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905116  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905133  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905151  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905159  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905205  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.906774  584371 out.go:177] * Verifying Kubernetes components...
	I1008 19:08:48.908138  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:48.942865  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1008 19:08:48.943612  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.944201  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.944232  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.944667  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.944748  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1008 19:08:48.945485  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.945526  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.945763  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.946464  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.946484  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.946530  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1008 19:08:48.946935  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.947052  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.947649  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.947693  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.948006  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.948027  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.948379  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.948602  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.951770  584371 addons.go:234] Setting addon default-storageclass=true in "no-preload-966632"
	W1008 19:08:48.951788  584371 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:08:48.951819  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.952055  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.952095  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.962422  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1008 19:08:48.962931  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.963509  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.963532  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.963908  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.964117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.965879  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.967812  584371 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:08:48.967853  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1008 19:08:48.967817  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1008 19:08:48.968376  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968436  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968885  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.968906  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.968964  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:08:48.968986  584371 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:08:48.969010  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.969290  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.969449  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.969472  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.969910  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.969941  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.970187  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.970430  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.972100  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972523  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.972544  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972677  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.972735  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.973016  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.973191  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.973323  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.974390  584371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:48.975651  584371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:48.975670  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:08:48.975686  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.978500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.978855  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.978876  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.979079  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.979474  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.979640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.979766  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.994846  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1008 19:08:48.995180  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.995592  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.995607  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.995976  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.996173  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.998270  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.998549  584371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:48.998568  584371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:08:48.998591  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:49.000647  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.000908  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:49.000924  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.001078  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:49.001185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:49.001282  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:49.001358  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:49.118217  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:49.138077  584371 node_ready.go:35] waiting up to 6m0s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:49.217300  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:49.241237  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:49.365395  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:08:49.365420  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:08:45.873500  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.373215  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:49.403596  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:08:49.403625  584371 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:08:49.438480  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:49.438540  584371 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:08:49.464366  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:50.474783  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.233506833s)
	I1008 19:08:50.474850  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474862  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.474914  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.257567473s)
	I1008 19:08:50.474955  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474964  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475191  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475206  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475215  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475221  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475280  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475289  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475297  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475303  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475310  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475441  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475454  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475582  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475596  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475628  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482003  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.482031  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.482315  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.482351  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482372  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.512902  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.048483922s)
	I1008 19:08:50.512957  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.512980  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513241  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513257  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513261  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513299  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.513307  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513534  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513552  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513561  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513577  584371 addons.go:475] Verifying addon metrics-server=true in "no-preload-966632"
	I1008 19:08:50.515302  584371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:08:46.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.448332  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:50.449239  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.516457  584371 addons.go:510] duration metric: took 1.612011936s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:08:51.141437  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:53.142166  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:54.141208  584371 node_ready.go:49] node "no-preload-966632" has status "Ready":"True"
	I1008 19:08:54.141238  584371 node_ready.go:38] duration metric: took 5.003121669s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:54.141251  584371 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:54.146685  584371 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151059  584371 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:54.151078  584371 pod_ready.go:82] duration metric: took 4.369406ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151086  584371 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:50.872416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:53.372230  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:52.947461  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:54.950183  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.157153  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.157458  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.658595  584371 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.658617  584371 pod_ready.go:82] duration metric: took 4.507524391s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.658627  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663785  584371 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.663811  584371 pod_ready.go:82] duration metric: took 5.176586ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663823  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668310  584371 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.668342  584371 pod_ready.go:82] duration metric: took 4.509914ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668356  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672380  584371 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.672397  584371 pod_ready.go:82] duration metric: took 4.034104ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672405  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676499  584371 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.676517  584371 pod_ready.go:82] duration metric: took 4.106343ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676527  584371 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:55.873069  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.372424  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:57.448182  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:59.947932  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.682583  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.682958  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:00.872650  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.872783  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:05.371825  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.447340  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:04.447504  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.683354  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.183362  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.183636  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.872083  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.874058  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.947502  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:08.948054  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.683833  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.188267  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:12.371967  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.372404  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.448009  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:13.948106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:15.948926  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:16.683453  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.184073  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:16.873511  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.372353  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:18.446864  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:20.449087  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:21.184209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.683535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:21.373869  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.872606  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:22.947224  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.947961  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:26.183587  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.184349  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:26.373049  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.873435  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.447254  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:29.447470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:30.683953  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.183412  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.371635  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.373244  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.448173  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.458099  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.947741  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:35.184447  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.682974  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.872226  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.872308  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:39.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:38.447494  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:40.947078  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:39.686907  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.183930  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.184443  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.372207  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.872360  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.947712  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:45.447011  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:46.682572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.683539  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.372618  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.373137  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.448127  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.947787  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
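The block above is one iteration of minikube's wait-for-apiserver loop: it looks for a kube-apiserver process, lists CRI containers for every control-plane component (all come back empty), then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying. The same diagnostics can be replayed by hand from inside the node; the sketch below simply reuses the commands shown in the log, run over minikube ssh. The profile name is not visible in this excerpt, so the -p argument is a placeholder.

    minikube ssh -p <profile>    # <profile> is a placeholder; use the failing test's profile name
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig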
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:50.683784  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:53.183034  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.873417  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.373414  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.948470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.447675  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:55.184058  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.683581  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.872213  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.872323  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.447790  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.947901  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.948561  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
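The interleaved pod_ready.go lines come from three other test clusters (processes 584371, 585096 and 585014), each waiting on a metrics-server pod that never reports Ready. A minimal sketch for inspecting one of these pods directly with plain kubectl is shown below; these commands are not what the test harness runs, the pod name is taken from the log, the Deployment name assumes the addon's usual name (metrics-server), and the current kubeconfig context is assumed to point at the affected cluster.

    kubectl -n kube-system get pod metrics-server-6867b74b74-rlt25 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True/False
    kubectl -n kube-system describe pod metrics-server-6867b74b74-rlt25 | tail -n 20   # recent events
    kubectl -n kube-system logs deploy/metrics-server --tail=50       # assumes Deployment "metrics-server"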
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:00.182341  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.183137  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:04.186590  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.872469  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.872708  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.372543  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.447536  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.947676  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.683572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.183605  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.373804  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.872253  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.947764  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.948705  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:11.184118  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:13.184173  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.372084  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.373845  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.447832  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.948882  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
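The recurring "connection to the server localhost:8443 was refused" in the describe-nodes step follows directly from the empty crictl listings above: no kube-apiserver container has been created, so the bundled kubectl has nothing to reach on port 8443. A quick confirmation from inside the node is sketched below; the first command is taken from the log, while the ss check is an assumption that iproute2 is present in the guest image.

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means no apiserver container exists yet
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"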
	I1008 19:10:15.682883  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.184458  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:16.374416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.872958  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:17.446820  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.448067  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:20.686987  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.182360  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.372104  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.872156  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.947427  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.950711  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:25.183749  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:27.184437  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.372132  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.372299  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.948117  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.948772  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:10:29.682860  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:31.683468  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:34.184018  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.872607  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:32.872784  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.373822  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:33.447573  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.447614  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:36.682470  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:38.683099  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.872270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.872490  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.450208  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.947418  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:40.683975  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.182280  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.372711  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.873233  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.446673  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.447301  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:45.188927  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.682373  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.371936  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:49.372190  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.447866  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:48.947579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
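The block above is one complete pass of minikube's recovery probe on this node: list CRI containers for each expected control-plane component, find none, then fall back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. Replaying the same checks by hand on the node would look roughly like the sketch below (it only reuses commands already shown in the log; nothing here was part of the recorded run):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  # empty output means no container (running or exited) exists for this component
	  sudo crictl ps -a --quiet --name="$c"
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

Empty crictl output for every component is what produces the repeated "No container was found matching ..." warnings in the cycles that follow.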
	I1008 19:10:49.683659  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.182955  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:54.184535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.373113  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.373239  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.446974  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.447804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.947655  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
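The interleaved pod_ready.go lines come from parallel test processes (584371, 585096, 585014), each polling a metrics-server pod that never reports Ready. A hand-run equivalent of that readiness check is sketched below; the pod name is copied from the log, and it assumes kubectl is pointed at the matching cluster:

	kubectl --namespace kube-system get pod metrics-server-6867b74b74-4d48d \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" while the pod's Ready condition is false, matching the logged status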
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:56.682535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.683610  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.872286  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:57.872418  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:59.872720  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.447354  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:00.447456  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:00.684164  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:03.182802  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.873903  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.372767  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:02.947088  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.948196  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:05.683195  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.184305  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:06.872641  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.872811  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.447814  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:09.948377  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:10.186012  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.682829  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:11.374597  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:13.872241  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.447966  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.448396  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.683559  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.185276  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.372851  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:18.872479  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.947677  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:19.449701  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
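Every describe-nodes attempt above fails the same way because nothing is answering on the apiserver port. Two quick node-side checks that would confirm this are sketched below; the pgrep pattern is the one the log itself runs, while the curl probe is an added illustration, not part of the recorded run:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process found"
	curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on 8443"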
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:19.682652  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.683209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.684681  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.371629  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.373290  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.947319  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:24.446685  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.183070  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.185526  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:25.872866  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.371245  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.371878  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.447559  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.948355  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.949170  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:30.683714  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.182628  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.373119  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:34.872336  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.447847  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:35.947756  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:35.183021  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.682254  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.371644  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:39.372270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.447549  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:40.946928  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.182928  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.873545  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.372751  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.947699  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.949106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:44.684067  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.182248  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.183397  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:46.871983  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:48.872101  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.447284  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.448275  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.949171  585014 pod_ready.go:82] duration metric: took 4m0.008012606s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:11:50.949202  585014 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:11:50.949213  585014 pod_ready.go:39] duration metric: took 4m6.974004451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:11:50.949249  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:11:50.949290  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.949351  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.998560  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:50.998584  585014 cri.go:89] found id: ""
	I1008 19:11:50.998591  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:11:50.998649  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.003407  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:51.003490  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:51.183611  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:53.683418  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.872692  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:52.873320  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:55.372722  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:51.049315  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:51.049343  585014 cri.go:89] found id: ""
	I1008 19:11:51.049353  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:11:51.049411  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.055212  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:51.055281  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:51.101271  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.101292  585014 cri.go:89] found id: ""
	I1008 19:11:51.101300  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:11:51.101360  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.105902  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:51.105966  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:51.150355  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.150390  585014 cri.go:89] found id: ""
	I1008 19:11:51.150402  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:11:51.150468  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.155116  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:51.155193  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:51.197754  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:51.197779  585014 cri.go:89] found id: ""
	I1008 19:11:51.197790  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:11:51.197846  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.201957  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:51.202023  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:51.239982  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:51.240001  585014 cri.go:89] found id: ""
	I1008 19:11:51.240009  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:11:51.240064  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.244580  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:51.244645  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:51.280099  585014 cri.go:89] found id: ""
	I1008 19:11:51.280126  585014 logs.go:282] 0 containers: []
	W1008 19:11:51.280137  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:51.280144  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:11:51.280205  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:11:51.323467  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:51.323508  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:51.323514  585014 cri.go:89] found id: ""
	I1008 19:11:51.323525  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:11:51.323676  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.328091  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.332113  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:51.332139  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:11:51.455430  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:11:51.455463  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.492792  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:11:51.492824  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.533732  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.533768  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:52.085919  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:11:52.085972  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:52.120874  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:52.120912  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:11:52.163961  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164188  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.164330  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164489  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.195681  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:52.195716  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:52.210569  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:11:52.210601  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:52.256667  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:11:52.256700  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:52.303627  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:11:52.303685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:52.340250  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:11:52.340279  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:52.402179  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:11:52.402213  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:52.440288  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:11:52.440326  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:52.478952  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.478979  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:11:52.479043  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:11:52.479060  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479068  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.479077  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479084  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.479092  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.479101  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.182942  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:58.184307  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:57.373902  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:59.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:00.683230  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:03.184294  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.372439  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:04.871327  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.480085  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.498401  585014 api_server.go:72] duration metric: took 4m26.226421652s to wait for apiserver process to appear ...
	I1008 19:12:02.498433  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:12:02.498479  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.498544  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:02.533531  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:02.533563  585014 cri.go:89] found id: ""
	I1008 19:12:02.533575  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:02.533643  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.537914  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:02.537985  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:02.579011  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:02.579039  585014 cri.go:89] found id: ""
	I1008 19:12:02.579049  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:02.579111  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.583628  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:02.583695  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:02.625038  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.625065  585014 cri.go:89] found id: ""
	I1008 19:12:02.625075  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:02.625138  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.629262  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:02.629331  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:02.662964  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:02.662988  585014 cri.go:89] found id: ""
	I1008 19:12:02.662997  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:02.663052  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.666955  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:02.667013  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:02.704552  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:02.704578  585014 cri.go:89] found id: ""
	I1008 19:12:02.704589  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:02.704640  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.708910  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:02.708962  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:02.743196  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.743220  585014 cri.go:89] found id: ""
	I1008 19:12:02.743229  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:02.743276  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.747488  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:02.747563  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:02.789367  585014 cri.go:89] found id: ""
	I1008 19:12:02.789405  585014 logs.go:282] 0 containers: []
	W1008 19:12:02.789418  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:02.789426  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:02.789479  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:02.828607  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:02.828640  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.828646  585014 cri.go:89] found id: ""
	I1008 19:12:02.828656  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:02.828723  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.832981  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.837258  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:02.837284  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.874214  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:02.874249  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.925844  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:02.925879  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.963715  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:02.963744  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.009069  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.009102  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:03.046628  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.046816  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.046947  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.047129  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.080027  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.080068  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:03.203192  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:03.203233  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:03.254645  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:03.254681  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:03.300881  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:03.300918  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:03.347403  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.347440  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.802754  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.802801  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.816658  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:03.816695  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:03.873630  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:03.873670  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:03.910834  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.910862  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:03.910932  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:03.910946  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910955  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.910972  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910983  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.910994  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.911006  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:05.682916  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:07.683273  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:06.872011  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:08.873682  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:09.684055  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.182786  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:14.186868  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:10.866893  585096 pod_ready.go:82] duration metric: took 4m0.000956825s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:10.866937  585096 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1008 19:12:10.866961  585096 pod_ready.go:39] duration metric: took 4m15.184400794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:10.866992  585096 kubeadm.go:597] duration metric: took 4m23.829186185s to restartPrimaryControlPlane
	W1008 19:12:10.867049  585096 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:10.867092  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:13.911944  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:12:13.917530  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:12:13.918513  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:12:13.918537  585014 api_server.go:131] duration metric: took 11.420096691s to wait for apiserver health ...
	I1008 19:12:13.918546  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:12:13.918570  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:13.918621  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:13.957026  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:13.957048  585014 cri.go:89] found id: ""
	I1008 19:12:13.957057  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:13.957114  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:13.961553  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:13.961611  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:13.996466  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:13.996497  585014 cri.go:89] found id: ""
	I1008 19:12:13.996508  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:13.996570  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.000972  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:14.001036  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:14.034888  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.034917  585014 cri.go:89] found id: ""
	I1008 19:12:14.034929  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:14.034989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.039145  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:14.039216  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:14.074109  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:14.074134  585014 cri.go:89] found id: ""
	I1008 19:12:14.074145  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:14.074202  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.078291  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:14.078371  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:14.113375  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:14.113403  585014 cri.go:89] found id: ""
	I1008 19:12:14.113413  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:14.113475  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.117909  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:14.118002  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:14.153800  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:14.153823  585014 cri.go:89] found id: ""
	I1008 19:12:14.153833  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:14.153898  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.158233  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:14.158302  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:14.195093  585014 cri.go:89] found id: ""
	I1008 19:12:14.195123  585014 logs.go:282] 0 containers: []
	W1008 19:12:14.195133  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:14.195142  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:14.195203  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:14.230894  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:14.230917  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:14.230921  585014 cri.go:89] found id: ""
	I1008 19:12:14.230929  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:14.230989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.236299  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.240914  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:14.240940  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:14.282289  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282488  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:14.282643  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282824  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:14.315207  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:14.315235  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:14.433616  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:14.433647  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:14.482640  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:14.482685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.524749  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:14.524788  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:14.979562  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:14.979629  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:15.016898  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:15.016941  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:15.058447  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:15.058478  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:15.114345  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:15.114384  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:15.128920  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:15.128948  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:15.176775  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:15.176817  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:15.215091  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:15.215129  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:15.256687  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:15.256731  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:15.311551  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311583  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:15.311641  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:15.311653  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311664  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:15.311676  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311681  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:15.311687  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311695  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:16.682464  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:19.182595  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:21.184074  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:23.682867  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:25.319305  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:12:25.319336  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.319340  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.319344  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.319348  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.319351  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.319354  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.319362  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.319365  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.319371  585014 system_pods.go:74] duration metric: took 11.400819931s to wait for pod list to return data ...
	I1008 19:12:25.319378  585014 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:12:25.322115  585014 default_sa.go:45] found service account: "default"
	I1008 19:12:25.322135  585014 default_sa.go:55] duration metric: took 2.751457ms for default service account to be created ...
	I1008 19:12:25.322143  585014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:12:25.326570  585014 system_pods.go:86] 8 kube-system pods found
	I1008 19:12:25.326590  585014 system_pods.go:89] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.326595  585014 system_pods.go:89] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.326599  585014 system_pods.go:89] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.326604  585014 system_pods.go:89] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.326610  585014 system_pods.go:89] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.326615  585014 system_pods.go:89] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.326625  585014 system_pods.go:89] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.326633  585014 system_pods.go:89] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.326642  585014 system_pods.go:126] duration metric: took 4.494323ms to wait for k8s-apps to be running ...
	I1008 19:12:25.326651  585014 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:12:25.326701  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:25.344597  585014 system_svc.go:56] duration metric: took 17.941012ms WaitForService to wait for kubelet
	I1008 19:12:25.344621  585014 kubeadm.go:582] duration metric: took 4m49.072648847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:12:25.344638  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:12:25.347385  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:12:25.347404  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:12:25.347425  585014 node_conditions.go:105] duration metric: took 2.783181ms to run NodePressure ...
	I1008 19:12:25.347437  585014 start.go:241] waiting for startup goroutines ...
	I1008 19:12:25.347450  585014 start.go:246] waiting for cluster config update ...
	I1008 19:12:25.347463  585014 start.go:255] writing updated cluster config ...
	I1008 19:12:25.347823  585014 ssh_runner.go:195] Run: rm -f paused
	I1008 19:12:25.395903  585014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:12:25.397911  585014 out.go:177] * Done! kubectl is now configured to use "embed-certs-783146" cluster and "default" namespace by default
	I1008 19:12:25.683645  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:28.182995  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:30.183567  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:32.682881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.013046  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.145916528s)
	I1008 19:12:37.013156  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:37.028010  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:37.037493  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:37.046435  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:37.046455  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:37.046495  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:12:37.055422  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:37.055482  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:37.064538  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:12:37.072968  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:37.073021  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:37.081754  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.090143  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:37.090179  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.098726  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:12:37.107261  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:37.107308  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:37.115975  585096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:37.163570  585096 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:12:37.163642  585096 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:37.272891  585096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:37.273025  585096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:37.273151  585096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:12:37.284204  585096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:37.286084  585096 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:37.286175  585096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:37.286263  585096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:37.286385  585096 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:37.286443  585096 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:37.286545  585096 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:37.286638  585096 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:37.286729  585096 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:37.286812  585096 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:37.286912  585096 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:37.287010  585096 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:37.287082  585096 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:37.287172  585096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:37.602946  585096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:37.727897  585096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:12:37.932126  585096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:37.989742  585096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:38.036655  585096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:38.037085  585096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:38.040618  585096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:35.182881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.683718  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:38.042238  585096 out.go:235]   - Booting up control plane ...
	I1008 19:12:38.042374  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:38.042568  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:38.043504  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:38.065666  585096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:38.071727  585096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:38.071814  585096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:38.210382  585096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:12:38.210516  585096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:12:39.213697  585096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003319891s
	I1008 19:12:39.213803  585096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:12:43.717718  585096 kubeadm.go:310] [api-check] The API server is healthy after 4.502167036s
	I1008 19:12:43.728628  585096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:12:43.744283  585096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:12:43.775369  585096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:12:43.775621  585096 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-142496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:12:43.788583  585096 kubeadm.go:310] [bootstrap-token] Using token: srsq4v.7le212xun40ljc7w
	I1008 19:12:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:42.183680  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:44.185065  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:43.789834  585096 out.go:235]   - Configuring RBAC rules ...
	I1008 19:12:43.789945  585096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:12:43.796091  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:12:43.807906  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:12:43.811025  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:12:43.814445  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:12:43.817615  585096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:12:44.122839  585096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:12:44.567387  585096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:12:45.122714  585096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:12:45.123480  585096 kubeadm.go:310] 
	I1008 19:12:45.123590  585096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:12:45.123617  585096 kubeadm.go:310] 
	I1008 19:12:45.123740  585096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:12:45.123749  585096 kubeadm.go:310] 
	I1008 19:12:45.123789  585096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:12:45.123870  585096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:12:45.123958  585096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:12:45.123984  585096 kubeadm.go:310] 
	I1008 19:12:45.124064  585096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:12:45.124080  585096 kubeadm.go:310] 
	I1008 19:12:45.124152  585096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:12:45.124162  585096 kubeadm.go:310] 
	I1008 19:12:45.124248  585096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:12:45.124366  585096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:12:45.124456  585096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:12:45.124469  585096 kubeadm.go:310] 
	I1008 19:12:45.124579  585096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:12:45.124682  585096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:12:45.124692  585096 kubeadm.go:310] 
	I1008 19:12:45.124804  585096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.124926  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:12:45.124953  585096 kubeadm.go:310] 	--control-plane 
	I1008 19:12:45.124958  585096 kubeadm.go:310] 
	I1008 19:12:45.125086  585096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:12:45.125093  585096 kubeadm.go:310] 
	I1008 19:12:45.125182  585096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.125321  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:12:45.126852  585096 kubeadm.go:310] W1008 19:12:37.105673    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127231  585096 kubeadm.go:310] W1008 19:12:37.106373    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127380  585096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:12:45.127429  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:12:45.127452  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:12:45.129742  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:12:45.130870  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:12:45.143909  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:12:45.170901  585096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:12:45.170965  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:45.170972  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-142496 minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=default-k8s-diff-port-142496 minikube.k8s.io/primary=true
	I1008 19:12:45.198031  585096 ops.go:34] apiserver oom_adj: -16
	I1008 19:12:45.385789  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.684251  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:49.183225  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:45.886434  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.386165  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.886920  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.386786  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.885835  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.386706  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.885981  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.386856  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.471554  585096 kubeadm.go:1113] duration metric: took 4.300656747s to wait for elevateKubeSystemPrivileges
	I1008 19:12:49.471596  585096 kubeadm.go:394] duration metric: took 5m2.486064826s to StartCluster
	I1008 19:12:49.471627  585096 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.471736  585096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:12:49.473381  585096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.473676  585096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:12:49.473768  585096 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:12:49.473874  585096 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473897  585096 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142496"
	I1008 19:12:49.473899  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:12:49.473904  585096 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473923  585096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142496"
	W1008 19:12:49.473907  585096 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:12:49.473939  585096 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473955  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.473967  585096 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.473981  585096 addons.go:243] addon metrics-server should already be in state true
	I1008 19:12:49.474022  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.474283  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474313  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474338  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474366  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474373  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474405  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.475217  585096 out.go:177] * Verifying Kubernetes components...
	I1008 19:12:49.476402  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:12:49.490880  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1008 19:12:49.491405  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.492070  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.492093  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.492454  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.492990  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.493040  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.493623  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 19:12:49.493646  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1008 19:12:49.494011  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494067  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494548  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494565  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494763  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494790  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494930  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495102  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.495276  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495871  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.495908  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.498744  585096 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.498764  585096 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:12:49.498787  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.499142  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.499173  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.514047  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1008 19:12:49.514527  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.515028  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.515046  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.515493  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.515662  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.516519  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1008 19:12:49.517015  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.517643  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.517661  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.517706  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.517757  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I1008 19:12:49.518133  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.518458  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.518617  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.518643  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.518681  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.519107  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.519527  585096 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:12:49.519808  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.519923  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.520415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.520624  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:12:49.520644  585096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:12:49.520669  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.522226  585096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:12:49.523372  585096 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.523396  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:12:49.523415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.523947  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524437  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.524464  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.524830  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.525042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.525198  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.527349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.527693  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527842  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.528009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.528186  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.528325  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.536509  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I1008 19:12:49.536879  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.537341  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.537359  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.537606  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.537897  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.539570  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.539810  585096 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.539831  585096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:12:49.539848  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.542955  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.543522  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543543  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.543726  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.543888  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.544023  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.721845  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:12:49.741622  585096 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.763968  585096 node_ready.go:49] node "default-k8s-diff-port-142496" has status "Ready":"True"
	I1008 19:12:49.764005  585096 node_ready.go:38] duration metric: took 22.348135ms for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.764019  585096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:49.793150  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:49.867565  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.904041  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.912694  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:12:49.912723  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:12:49.962053  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:12:49.962082  585096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:12:50.004678  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.004709  585096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:12:50.068528  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.394807  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394824  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394836  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.394841  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395140  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395161  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395172  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395181  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395181  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395195  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395201  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.395205  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395262  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395425  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395439  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395616  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395668  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395643  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416509  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.416532  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.416815  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416865  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.416880  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634404  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634428  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634722  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.634744  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634752  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634761  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634769  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635036  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635066  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.635079  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.635100  585096 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-142496"
	I1008 19:12:50.636555  585096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:12:51.683959  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.182376  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:50.637816  585096 addons.go:510] duration metric: took 1.164063633s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:12:51.799881  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.299619  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:56.183179  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683102  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683159  584371 pod_ready.go:82] duration metric: took 4m0.006623922s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:58.683173  584371 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:12:58.683184  584371 pod_ready.go:39] duration metric: took 4m4.541923995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:58.683207  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:12:58.683245  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:58.683296  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:58.729385  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:58.729407  584371 cri.go:89] found id: ""
	I1008 19:12:58.729417  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:12:58.729472  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.734291  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:58.734382  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:58.772015  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:12:58.772050  584371 cri.go:89] found id: ""
	I1008 19:12:58.772062  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:12:58.772123  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.776231  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:58.776300  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:58.812962  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:58.812982  584371 cri.go:89] found id: ""
	I1008 19:12:58.812991  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:12:58.813046  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.816951  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:58.817002  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:58.852918  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:58.852939  584371 cri.go:89] found id: ""
	I1008 19:12:58.852946  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:12:58.852992  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.857184  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:58.857245  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:58.895233  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:12:58.895254  584371 cri.go:89] found id: ""
	I1008 19:12:58.895264  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:12:58.895317  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.899301  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:58.899354  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:58.933918  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:58.933946  584371 cri.go:89] found id: ""
	I1008 19:12:58.933956  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:12:58.934003  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.938274  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:58.938361  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:58.980067  584371 cri.go:89] found id: ""
	I1008 19:12:58.980094  584371 logs.go:282] 0 containers: []
	W1008 19:12:58.980104  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:58.980113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:58.980174  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:59.013783  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:12:59.013812  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.013817  584371 cri.go:89] found id: ""
	I1008 19:12:59.013827  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:12:59.013886  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.018420  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.024462  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:12:59.024486  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.062654  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:12:59.062688  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:59.110932  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:59.110966  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:59.248699  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:12:59.248734  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:59.294439  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:12:59.294473  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:59.331208  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:12:59.331241  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:59.374242  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:12:59.374283  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:56.799487  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.800290  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:59.800320  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.800349  585096 pod_ready.go:82] duration metric: took 10.007162242s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.800361  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804590  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.804609  585096 pod_ready.go:82] duration metric: took 4.240474ms for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804620  585096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808737  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.808754  585096 pod_ready.go:82] duration metric: took 4.127686ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808762  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813126  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.813146  585096 pod_ready.go:82] duration metric: took 4.37796ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813154  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817020  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.817039  585096 pod_ready.go:82] duration metric: took 3.878053ms for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817048  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197958  585096 pod_ready.go:93] pod "kube-proxy-wd5kv" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.197983  585096 pod_ready.go:82] duration metric: took 380.928087ms for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197992  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597495  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.597521  585096 pod_ready.go:82] duration metric: took 399.522182ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597529  585096 pod_ready.go:39] duration metric: took 10.833495765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:13:00.597545  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:13:00.597612  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:00.613266  585096 api_server.go:72] duration metric: took 11.139554705s to wait for apiserver process to appear ...
	I1008 19:13:00.613289  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:00.613308  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:13:00.618420  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:13:00.619376  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:00.619399  585096 api_server.go:131] duration metric: took 6.102941ms to wait for apiserver health ...
	I1008 19:13:00.619407  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:00.800687  585096 system_pods.go:59] 9 kube-system pods found
	I1008 19:13:00.800720  585096 system_pods.go:61] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:00.800729  585096 system_pods.go:61] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:00.800733  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:00.800737  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:00.800740  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:00.800743  585096 system_pods.go:61] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:00.800747  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:00.800752  585096 system_pods.go:61] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:00.800755  585096 system_pods.go:61] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:00.800765  585096 system_pods.go:74] duration metric: took 181.352111ms to wait for pod list to return data ...
	I1008 19:13:00.800773  585096 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:00.997631  585096 default_sa.go:45] found service account: "default"
	I1008 19:13:00.997657  585096 default_sa.go:55] duration metric: took 196.876434ms for default service account to be created ...
	I1008 19:13:00.997667  585096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:01.199366  585096 system_pods.go:86] 9 kube-system pods found
	I1008 19:13:01.199396  585096 system_pods.go:89] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:01.199402  585096 system_pods.go:89] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:01.199406  585096 system_pods.go:89] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:01.199409  585096 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:01.199413  585096 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:01.199416  585096 system_pods.go:89] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:01.199419  585096 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:01.199426  585096 system_pods.go:89] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:01.199430  585096 system_pods.go:89] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:01.199439  585096 system_pods.go:126] duration metric: took 201.766214ms to wait for k8s-apps to be running ...
	I1008 19:13:01.199447  585096 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:01.199492  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:01.214863  585096 system_svc.go:56] duration metric: took 15.401989ms WaitForService to wait for kubelet
	I1008 19:13:01.214895  585096 kubeadm.go:582] duration metric: took 11.741185862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:01.214919  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:01.397506  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:01.397530  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:01.397541  585096 node_conditions.go:105] duration metric: took 182.616774ms to run NodePressure ...
	I1008 19:13:01.397553  585096 start.go:241] waiting for startup goroutines ...
	I1008 19:13:01.397560  585096 start.go:246] waiting for cluster config update ...
	I1008 19:13:01.397570  585096 start.go:255] writing updated cluster config ...
	I1008 19:13:01.397828  585096 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:01.448158  585096 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:01.450201  585096 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142496" cluster and "default" namespace by default
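A minimal shell sketch (not part of the captured output) of how the state this log reports for "default-k8s-diff-port-142496" could be re-checked by hand. It assumes kubectl is configured for that context on the test host, reuses the apiserver address 192.168.50.213:8444 shown above, and the `kubectl top` call will only return data once the metrics-server pod listed above leaves Pending:

	# list the kube-system pods the log enumerates (coredns, etcd, kube-proxy, metrics-server, ...)
	kubectl --context default-k8s-diff-port-142496 get pods -n kube-system
	# hit the same healthz endpoint the test polls; an "ok" body matches the 200 seen above
	curl -k https://192.168.50.213:8444/healthz
	# metrics-server data appears only after its pod reports Ready
	kubectl --context default-k8s-diff-port-142496 top nodes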
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:59.438777  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:59.438814  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:59.945253  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:59.945302  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:00.016570  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:00.016607  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:00.034150  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:00.034183  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:00.075423  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:00.075456  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:00.111132  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:00.111164  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.646570  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:02.666594  584371 api_server.go:72] duration metric: took 4m13.762192057s to wait for apiserver process to appear ...
	I1008 19:13:02.666620  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:02.666663  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:02.666718  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:02.704214  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:02.704242  584371 cri.go:89] found id: ""
	I1008 19:13:02.704250  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:02.704298  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.708636  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:02.708717  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:02.748418  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:02.748444  584371 cri.go:89] found id: ""
	I1008 19:13:02.748455  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:02.748515  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.753267  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:02.753332  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:02.790534  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:02.790562  584371 cri.go:89] found id: ""
	I1008 19:13:02.790571  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:02.790636  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.794880  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:02.794950  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:02.834754  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:02.834774  584371 cri.go:89] found id: ""
	I1008 19:13:02.834781  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:02.834830  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.839391  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:02.839463  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:02.878344  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:02.878371  584371 cri.go:89] found id: ""
	I1008 19:13:02.878380  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:02.878425  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.882939  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:02.883025  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:02.920081  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:02.920104  584371 cri.go:89] found id: ""
	I1008 19:13:02.920112  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:02.920168  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.924141  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:02.924205  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:02.959700  584371 cri.go:89] found id: ""
	I1008 19:13:02.959730  584371 logs.go:282] 0 containers: []
	W1008 19:13:02.959741  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:02.959750  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:02.959822  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:02.996900  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.996927  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:02.996933  584371 cri.go:89] found id: ""
	I1008 19:13:02.996940  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:02.996989  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.001152  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.005021  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:03.005046  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:03.069775  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:03.069813  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:03.120028  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:03.120060  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:03.155756  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:03.155784  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:03.195587  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:03.195624  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:03.231844  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:03.231875  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:03.271156  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:03.271187  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:03.286994  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:03.287017  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:03.397237  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:03.397269  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:03.442373  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:03.442407  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:03.500191  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:03.500222  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:03.535448  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:03.535490  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:03.966382  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:03.966425  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:06.513885  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:13:06.518111  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:13:06.519310  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:06.519331  584371 api_server.go:131] duration metric: took 3.852704338s to wait for apiserver health ...
	I1008 19:13:06.519341  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:06.519370  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:06.519417  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:06.558940  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:06.558965  584371 cri.go:89] found id: ""
	I1008 19:13:06.558979  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:06.559029  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.563471  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:06.563537  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:06.607844  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:06.607873  584371 cri.go:89] found id: ""
	I1008 19:13:06.607883  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:06.607944  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.612399  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:06.612456  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:06.645502  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:06.645521  584371 cri.go:89] found id: ""
	I1008 19:13:06.645528  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:06.645575  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.649442  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:06.649519  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:06.685085  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:06.685114  584371 cri.go:89] found id: ""
	I1008 19:13:06.685126  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:06.685183  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.689859  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:06.689935  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:06.724775  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:06.724803  584371 cri.go:89] found id: ""
	I1008 19:13:06.724814  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:06.724873  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.729489  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:06.729542  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:06.776599  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:06.776626  584371 cri.go:89] found id: ""
	I1008 19:13:06.776636  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:06.776704  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.780790  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:06.780863  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:06.817072  584371 cri.go:89] found id: ""
	I1008 19:13:06.817097  584371 logs.go:282] 0 containers: []
	W1008 19:13:06.817106  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:06.817113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:06.817171  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:06.855429  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:06.855453  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:06.855457  584371 cri.go:89] found id: ""
	I1008 19:13:06.855465  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:06.855520  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.859774  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.863800  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:06.863821  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:06.931413  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:06.931443  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:06.946213  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:06.946236  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:07.070604  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:07.070640  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:07.114749  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:07.114782  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:07.152555  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:07.152584  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:07.192730  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:07.192759  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:07.242001  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:07.242036  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:07.612662  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:07.612714  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:07.656655  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:07.656700  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:07.695462  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:07.695494  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:07.733107  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:07.733143  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:07.779348  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:07.779382  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:10.325584  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:13:10.325616  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.325620  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.325624  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.325628  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.325631  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.325634  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.325639  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.325644  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.325651  584371 system_pods.go:74] duration metric: took 3.806304739s to wait for pod list to return data ...
	I1008 19:13:10.325659  584371 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:10.328062  584371 default_sa.go:45] found service account: "default"
	I1008 19:13:10.328082  584371 default_sa.go:55] duration metric: took 2.41797ms for default service account to be created ...
	I1008 19:13:10.328089  584371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:10.332201  584371 system_pods.go:86] 8 kube-system pods found
	I1008 19:13:10.332224  584371 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.332229  584371 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.332233  584371 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.332237  584371 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.332241  584371 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.332245  584371 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.332250  584371 system_pods.go:89] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.332254  584371 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.332261  584371 system_pods.go:126] duration metric: took 4.167739ms to wait for k8s-apps to be running ...
	I1008 19:13:10.332270  584371 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:10.332313  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:10.350257  584371 system_svc.go:56] duration metric: took 17.979349ms WaitForService to wait for kubelet
	I1008 19:13:10.350288  584371 kubeadm.go:582] duration metric: took 4m21.445892386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:10.350310  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:10.352582  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:10.352598  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:10.352609  584371 node_conditions.go:105] duration metric: took 2.294326ms to run NodePressure ...
	I1008 19:13:10.352620  584371 start.go:241] waiting for startup goroutines ...
	I1008 19:13:10.352626  584371 start.go:246] waiting for cluster config update ...
	I1008 19:13:10.352636  584371 start.go:255] writing updated cluster config ...
	I1008 19:13:10.352882  584371 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:10.401998  584371 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:10.404037  584371 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 
	
	
	==> CRI-O <==
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.325261887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728414970325241509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f78d790a-b98d-4f22-87ce-d528d6b0855c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.325738978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5493922-e489-43b2-ac77-b36cda54144c name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.325788329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5493922-e489-43b2-ac77-b36cda54144c name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.325824655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f5493922-e489-43b2-ac77-b36cda54144c name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.357197602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=799e199e-a185-4828-a6b2-2ce5423eb526 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.357259724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=799e199e-a185-4828-a6b2-2ce5423eb526 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.358355247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a53ade81-b931-455d-a42e-b509f0269770 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.358823674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728414970358795477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a53ade81-b931-455d-a42e-b509f0269770 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.359428477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97c1bbb2-2817-4cf6-8c60-8d738833d948 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.359494669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97c1bbb2-2817-4cf6-8c60-8d738833d948 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.359533005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=97c1bbb2-2817-4cf6-8c60-8d738833d948 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.390072765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=721694d9-b3d7-4caf-8414-09fe8be1fbac name=/runtime.v1.RuntimeService/Version
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.390190870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=721694d9-b3d7-4caf-8414-09fe8be1fbac name=/runtime.v1.RuntimeService/Version
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.391150517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79614c71-c851-4d1d-b02c-1a4c526cdc2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.391547135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728414970391523612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79614c71-c851-4d1d-b02c-1a4c526cdc2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.392446189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ee6d194-e84d-4eb4-b337-047573a9c4e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.392515261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ee6d194-e84d-4eb4-b337-047573a9c4e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.392568155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7ee6d194-e84d-4eb4-b337-047573a9c4e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.422931661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44349831-09a1-470f-8794-42e27924c96b name=/runtime.v1.RuntimeService/Version
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.423002543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44349831-09a1-470f-8794-42e27924c96b name=/runtime.v1.RuntimeService/Version
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.424017621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=186e3369-af36-422e-a639-98ba0fb5d64e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.424431733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728414970424405801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=186e3369-af36-422e-a639-98ba0fb5d64e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.425067793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4caf5c7d-743e-4ea0-96d2-f0b351f5f080 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.425183282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4caf5c7d-743e-4ea0-96d2-f0b351f5f080 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:16:10 old-k8s-version-256554 crio[632]: time="2024-10-08 19:16:10.425222156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4caf5c7d-743e-4ea0-96d2-f0b351f5f080 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050416] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044675] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.049563] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.581000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586261] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 8 19:08] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.059019] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068335] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.205375] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.133900] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.277385] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.210273] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066679] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.142543] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +12.037421] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 8 19:12] systemd-fstab-generator[5070]: Ignoring "noauto" option for root device
	[Oct 8 19:14] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.062152] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:16:10 up 8 min,  0 users,  load average: 0.03, 0.07, 0.02
	Linux old-k8s-version-256554 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bb2f60, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c47590, 0x24, 0x60, 0x7fd99d1abbf0, 0x118, ...)
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: net/http.(*Transport).dial(0xc000628140, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c47590, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: net/http.(*Transport).dialConn(0xc000628140, 0x4f7fe00, 0xc000120018, 0x0, 0xc000344540, 0x5, 0xc000c47590, 0x24, 0x0, 0xc0008cd9e0, ...)
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: net/http.(*Transport).dialConnFor(0xc000628140, 0xc000c01340)
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: created by net/http.(*Transport).queueForDial
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: goroutine 168 [select]:
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c758c0, 0xc0008def00, 0xc000c7d0e0, 0xc000c7d080)
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]: created by net.(*netFD).connect
	Oct 08 19:16:07 old-k8s-version-256554 kubelet[5528]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Oct 08 19:16:07 old-k8s-version-256554 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 08 19:16:07 old-k8s-version-256554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 08 19:16:08 old-k8s-version-256554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 08 19:16:08 old-k8s-version-256554 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 08 19:16:08 old-k8s-version-256554 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 08 19:16:08 old-k8s-version-256554 kubelet[5584]: I1008 19:16:08.407300    5584 server.go:416] Version: v1.20.0
	Oct 08 19:16:08 old-k8s-version-256554 kubelet[5584]: I1008 19:16:08.407587    5584 server.go:837] Client rotation is on, will bootstrap in background
	Oct 08 19:16:08 old-k8s-version-256554 kubelet[5584]: I1008 19:16:08.409503    5584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 08 19:16:08 old-k8s-version-256554 kubelet[5584]: W1008 19:16:08.410410    5584 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 08 19:16:08 old-k8s-version-256554 kubelet[5584]: I1008 19:16:08.410801    5584 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (238.072556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-256554" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (710.42s)
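The kubelet trace above shows the service crash-looping (systemd restart counter at 20) while the profile's apiserver reports "Stopped". A minimal diagnostic sketch, assuming the old-k8s-version-256554 VM is still reachable over SSH through minikube and that the quoted-command form of minikube ssh is used; both commands only read state on the node:

	minikube ssh -p old-k8s-version-256554 "sudo systemctl status kubelet --no-pager"
	minikube ssh -p old-k8s-version-256554 "sudo journalctl -u kubelet -n 100 --no-pager"

The second command tails the kubelet journal, which should contain the panic and stack trace that precede each restart seen in the log above.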

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783146 -n embed-certs-783146
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-08 19:21:25.934198085 +0000 UTC m=+6479.423350282
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
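The wait that times out here is for any pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace. A minimal manual equivalent, assuming the embed-certs-783146 context is present in the kubeconfig used by this run:

	kubectl --context embed-certs-783146 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-783146 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-783146 -n kubernetes-dashboard get events --sort-by=.lastTimestamp

If the cluster's apiserver is not reachable, these commands fail at the connection stage rather than returning an empty list, which distinguishes a dashboard deployment problem from a control-plane outage.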
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-783146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-783146 logs -n 25: (2.008175038s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-038693 sudo                            | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                 | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:04:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:20.270613  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:23.342576  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:04:29.422638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:32.494606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:38.574600  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:41.646592  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:47.726606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:50.798595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:56.878669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:59.950607  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:06.030583  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:09.102584  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:15.182571  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:18.254590  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:24.334638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:27.406606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:33.486619  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:36.558552  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:42.638565  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:45.710610  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:51.790561  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:54.862591  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:00.942606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:04.014669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:10.094618  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:13.166598  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:19.246573  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:22.318595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:28.398732  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:31.470685  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:37.550574  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:40.622614  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:46.702620  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:49.774581  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:55.854627  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:58.926568  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:07:01.929445  585014 start.go:364] duration metric: took 3m15.782086174s to acquireMachinesLock for "embed-certs-783146"
	I1008 19:07:01.929517  585014 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:01.929523  585014 fix.go:54] fixHost starting: 
	I1008 19:07:01.929889  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:01.929945  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:01.945409  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 19:07:01.945858  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:01.946357  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:01.946387  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:01.946744  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:01.946895  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:01.947028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:01.948399  585014 fix.go:112] recreateIfNeeded on embed-certs-783146: state=Stopped err=<nil>
	I1008 19:07:01.948419  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	W1008 19:07:01.948545  585014 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:01.954020  585014 out.go:177] * Restarting existing kvm2 VM for "embed-certs-783146" ...
	I1008 19:07:01.926825  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:01.926871  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927219  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:07:01.927270  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927475  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:07:01.929278  584371 machine.go:96] duration metric: took 4m37.425232924s to provisionDockerMachine
	I1008 19:07:01.929341  584371 fix.go:56] duration metric: took 4m37.445578307s for fixHost
	I1008 19:07:01.929349  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 4m37.445609603s
	W1008 19:07:01.929369  584371 start.go:714] error starting host: provision: host is not running
	W1008 19:07:01.929510  584371 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1008 19:07:01.929524  584371 start.go:729] Will try again in 5 seconds ...
	I1008 19:07:01.955309  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Start
	I1008 19:07:01.955452  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 19:07:01.956122  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 19:07:01.956432  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 19:07:01.956743  585014 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 19:07:01.957427  585014 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 19:07:03.159229  585014 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 19:07:03.160116  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.160503  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.160565  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.160497  585935 retry.go:31] will retry after 282.873854ms: waiting for machine to come up
	I1008 19:07:03.445297  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.445810  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.445838  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.445740  585935 retry.go:31] will retry after 344.936527ms: waiting for machine to come up
	I1008 19:07:03.792413  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.792802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.792837  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.792741  585935 retry.go:31] will retry after 414.968289ms: waiting for machine to come up
	I1008 19:07:04.209200  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.209532  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.209555  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.209502  585935 retry.go:31] will retry after 403.180416ms: waiting for machine to come up
	I1008 19:07:04.614156  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.614679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.614713  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.614636  585935 retry.go:31] will retry after 631.841511ms: waiting for machine to come up
	I1008 19:07:05.248574  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.248983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.249015  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.248917  585935 retry.go:31] will retry after 639.776909ms: waiting for machine to come up
	I1008 19:07:05.890868  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.891332  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.891406  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.891329  585935 retry.go:31] will retry after 764.489176ms: waiting for machine to come up
	I1008 19:07:06.931497  584371 start.go:360] acquireMachinesLock for no-preload-966632: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:06.657130  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:06.657520  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:06.657550  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:06.657462  585935 retry.go:31] will retry after 1.348973281s: waiting for machine to come up
	I1008 19:07:08.008293  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:08.008779  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:08.008805  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:08.008740  585935 retry.go:31] will retry after 1.146283289s: waiting for machine to come up
	I1008 19:07:09.157106  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:09.157517  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:09.157546  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:09.157493  585935 retry.go:31] will retry after 1.510430686s: waiting for machine to come up
	I1008 19:07:10.669393  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:10.669802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:10.669831  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:10.669749  585935 retry.go:31] will retry after 2.380864418s: waiting for machine to come up
	I1008 19:07:13.053078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:13.053487  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:13.053512  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:13.053427  585935 retry.go:31] will retry after 2.553865951s: waiting for machine to come up
	I1008 19:07:15.610098  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:15.610501  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:15.610535  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:15.610428  585935 retry.go:31] will retry after 4.018444789s: waiting for machine to come up
	I1008 19:07:20.967039  585096 start.go:364] duration metric: took 3m30.476693248s to acquireMachinesLock for "default-k8s-diff-port-142496"
	I1008 19:07:20.967105  585096 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:20.967115  585096 fix.go:54] fixHost starting: 
	I1008 19:07:20.967619  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:20.967675  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:20.984936  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1008 19:07:20.985358  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:20.985869  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:07:20.985896  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:20.986199  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:20.986380  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:20.986520  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:07:20.987828  585096 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142496: state=Stopped err=<nil>
	I1008 19:07:20.987867  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	W1008 19:07:20.988020  585096 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:20.990029  585096 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142496" ...
	I1008 19:07:19.632076  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632468  585014 main.go:141] libmachine: (embed-certs-783146) Found IP for machine: 192.168.72.183
	I1008 19:07:19.632504  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has current primary IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632511  585014 main.go:141] libmachine: (embed-certs-783146) Reserving static IP address...
	I1008 19:07:19.632968  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.633020  585014 main.go:141] libmachine: (embed-certs-783146) DBG | skip adding static IP to network mk-embed-certs-783146 - found existing host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"}
	I1008 19:07:19.633041  585014 main.go:141] libmachine: (embed-certs-783146) Reserved static IP address: 192.168.72.183
	I1008 19:07:19.633062  585014 main.go:141] libmachine: (embed-certs-783146) Waiting for SSH to be available...
	I1008 19:07:19.633073  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Getting to WaitForSSH function...
	I1008 19:07:19.634939  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635221  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.635249  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635415  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH client type: external
	I1008 19:07:19.635453  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa (-rw-------)
	I1008 19:07:19.635496  585014 main.go:141] libmachine: (embed-certs-783146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:19.635509  585014 main.go:141] libmachine: (embed-certs-783146) DBG | About to run SSH command:
	I1008 19:07:19.635522  585014 main.go:141] libmachine: (embed-certs-783146) DBG | exit 0
	I1008 19:07:19.758276  585014 main.go:141] libmachine: (embed-certs-783146) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:19.758658  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 19:07:19.759310  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:19.761990  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.762456  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762803  585014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:07:19.763012  585014 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:19.763034  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:19.763271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.765523  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765829  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.765858  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765988  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.766159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766289  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766433  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.766589  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.766877  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.766891  585014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:19.866272  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:19.866297  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866563  585014 buildroot.go:166] provisioning hostname "embed-certs-783146"
	I1008 19:07:19.866585  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866799  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.869295  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869648  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.869679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869836  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.870017  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870153  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870293  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.870444  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.870621  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.870636  585014 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-783146 && echo "embed-certs-783146" | sudo tee /etc/hostname
	I1008 19:07:19.983892  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-783146
	
	I1008 19:07:19.983925  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.986430  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986776  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.986806  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986922  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.987104  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.987588  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.987746  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.987762  585014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-783146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-783146/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-783146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:20.095178  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
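Note: the hostname provisioning above can be spot-checked from a shell on the guest. A minimal sketch using only commands that already appear in the log (nothing beyond standard coreutils is assumed):

	hostname                                 # expected: embed-certs-783146
	cat /etc/hostname
	grep embed-certs-783146 /etc/hosts       # expected: 127.0.1.1 embed-certs-783146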
	I1008 19:07:20.095212  585014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:20.095264  585014 buildroot.go:174] setting up certificates
	I1008 19:07:20.095276  585014 provision.go:84] configureAuth start
	I1008 19:07:20.095288  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:20.095578  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.098000  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098431  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.098459  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098591  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.100935  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101241  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.101271  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101393  585014 provision.go:143] copyHostCerts
	I1008 19:07:20.101452  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:20.101463  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:20.101544  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:20.101807  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:20.101824  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:20.101873  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:20.102015  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:20.102029  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:20.102075  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:20.102152  585014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-783146 san=[127.0.0.1 192.168.72.183 embed-certs-783146 localhost minikube]
	I1008 19:07:20.378020  585014 provision.go:177] copyRemoteCerts
	I1008 19:07:20.378093  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:20.378133  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.380678  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381017  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.381050  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381175  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.381386  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.381579  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.381717  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.464627  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:20.487853  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:07:20.510174  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:07:20.532381  585014 provision.go:87] duration metric: took 437.094502ms to configureAuth
	I1008 19:07:20.532405  585014 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:20.532571  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:20.532669  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.535064  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.535382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.535753  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.535920  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.536039  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.536193  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.536406  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.536429  585014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:20.745937  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:20.745967  585014 machine.go:96] duration metric: took 982.940955ms to provisionDockerMachine
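Note: the container-runtime option written just above can be verified by hand if needed; the following is an illustrative check, with the file path taken from the tee command in the log:

	cat /etc/sysconfig/crio.minikube         # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio            # should report: active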
	I1008 19:07:20.745980  585014 start.go:293] postStartSetup for "embed-certs-783146" (driver="kvm2")
	I1008 19:07:20.745994  585014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:20.746012  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.746380  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:20.746417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.749056  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749395  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.749425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749566  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.749738  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.749852  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.749943  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.828580  585014 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:20.832894  585014 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:20.832923  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:20.832994  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:20.833069  585014 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:20.833162  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:20.842230  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:20.864957  585014 start.go:296] duration metric: took 118.964089ms for postStartSetup
	I1008 19:07:20.865006  585014 fix.go:56] duration metric: took 18.93548189s for fixHost
	I1008 19:07:20.865029  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.867709  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868089  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.868113  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868223  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.868425  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868583  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868742  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.868926  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.869159  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.869175  585014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:20.966898  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414440.940275348
	
	I1008 19:07:20.966919  585014 fix.go:216] guest clock: 1728414440.940275348
	I1008 19:07:20.966926  585014 fix.go:229] Guest: 2024-10-08 19:07:20.940275348 +0000 UTC Remote: 2024-10-08 19:07:20.865011917 +0000 UTC m=+214.857488447 (delta=75.263431ms)
	I1008 19:07:20.966948  585014 fix.go:200] guest clock delta is within tolerance: 75.263431ms
	I1008 19:07:20.966953  585014 start.go:83] releasing machines lock for "embed-certs-783146", held for 19.037463535s
	I1008 19:07:20.966979  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.967246  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.969983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.970386  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970586  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971061  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971243  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971340  585014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:20.971382  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.971487  585014 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:20.971515  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.974211  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974581  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974632  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.974695  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974872  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.974999  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.975024  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.975028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975184  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975228  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.975374  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975501  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.975559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975709  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:21.072152  585014 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:21.078116  585014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:21.221176  585014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:21.227359  585014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:21.227434  585014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:21.242691  585014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:21.242716  585014 start.go:495] detecting cgroup driver to use...
	I1008 19:07:21.242796  585014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:21.257429  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:21.270208  585014 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:21.270258  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:21.282826  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:21.295827  585014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:21.405804  585014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:21.572147  585014 docker.go:233] disabling docker service ...
	I1008 19:07:21.572231  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:21.586083  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:21.598657  585014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:21.722224  585014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:21.853317  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:21.867234  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:21.884872  585014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:21.884949  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.895154  585014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:21.895223  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.905371  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.915602  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.926026  585014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:21.938089  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.949261  585014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.966211  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.978120  585014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:21.987631  585014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:21.987693  585014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:22.002185  585014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:22.013111  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:22.135933  585014 ssh_runner.go:195] Run: sudo systemctl restart crio
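Note: the sed edits above only touch /etc/crio/crio.conf.d/02-crio.conf, and the netfilter/ip_forward tweaks are plain sysctl/modprobe calls. A quick manual confirmation after the restart could look like this (a sketch, not part of the test run):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	sysctl net.ipv4.ip_forward               # should be 1 after the echo above
	lsmod | grep br_netfilter                # loaded by the modprobe above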
	I1008 19:07:22.230256  585014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:22.230342  585014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:22.235005  585014 start.go:563] Will wait 60s for crictl version
	I1008 19:07:22.235076  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:07:22.238991  585014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:22.279302  585014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:22.279391  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.308343  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.337272  585014 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
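Note: the runtime detection just performed reduces to three probes on the guest; reproduced here as a sketch, with the crictl path and versions taken from the log output:

	which crictl                             # /usr/bin/crictl
	sudo /usr/bin/crictl version             # RuntimeName: cri-o, RuntimeVersion: 1.29.1
	crio --version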
	I1008 19:07:20.991759  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Start
	I1008 19:07:20.991997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring networks are active...
	I1008 19:07:20.992703  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network default is active
	I1008 19:07:20.993057  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network mk-default-k8s-diff-port-142496 is active
	I1008 19:07:20.993435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Getting domain xml...
	I1008 19:07:20.994209  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Creating domain...
	I1008 19:07:22.240185  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting to get IP...
	I1008 19:07:22.240949  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241417  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241469  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.241382  586083 retry.go:31] will retry after 234.248435ms: waiting for machine to come up
	I1008 19:07:22.476800  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477343  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477375  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.477275  586083 retry.go:31] will retry after 323.851452ms: waiting for machine to come up
	I1008 19:07:22.802997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803574  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803610  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.803516  586083 retry.go:31] will retry after 445.299956ms: waiting for machine to come up
	I1008 19:07:23.250211  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250686  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250715  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.250651  586083 retry.go:31] will retry after 574.786836ms: waiting for machine to come up
	I1008 19:07:23.827535  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828010  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.827959  586083 retry.go:31] will retry after 563.165045ms: waiting for machine to come up
	I1008 19:07:24.393150  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393741  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393792  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.393717  586083 retry.go:31] will retry after 576.443855ms: waiting for machine to come up
	I1008 19:07:24.971698  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972132  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972161  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.972090  586083 retry.go:31] will retry after 999.17904ms: waiting for machine to come up
	I1008 19:07:22.338812  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:22.341998  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:22.342417  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342680  585014 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:22.346863  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:22.359456  585014 kubeadm.go:883] updating cluster {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:22.359630  585014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:22.359692  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:22.394832  585014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:22.394893  585014 ssh_runner.go:195] Run: which lz4
	I1008 19:07:22.398935  585014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:22.403100  585014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:22.403127  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:23.771685  585014 crio.go:462] duration metric: took 1.372780034s to copy over tarball
	I1008 19:07:23.771769  585014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:25.816508  585014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044704362s)
	I1008 19:07:25.816547  585014 crio.go:469] duration metric: took 2.04482777s to extract the tarball
	I1008 19:07:25.816557  585014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:25.852980  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:25.893366  585014 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:25.893391  585014 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:25.893399  585014 kubeadm.go:934] updating node { 192.168.72.183 8443 v1.31.1 crio true true} ...
	I1008 19:07:25.893517  585014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-783146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
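Note: the kubelet unit override shown above is later copied to the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a little further down). To inspect the merged unit by hand one could, for example, run:

	systemctl cat kubelet                    # base unit plus the 10-kubeadm.conf drop-in
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf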
	I1008 19:07:25.893579  585014 ssh_runner.go:195] Run: crio config
	I1008 19:07:25.934828  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:25.934850  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:25.934874  585014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:25.934906  585014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.183 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-783146 NodeName:embed-certs-783146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:25.935039  585014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-783146"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:25.935106  585014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:25.944851  585014 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:25.944919  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:25.954022  585014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1008 19:07:25.979675  585014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:26.001147  585014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
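Note: the file just staged is the rendered kubeadm config printed above. If such a file ever needs to be validated by hand, a dry run against the bundled kubeadm binary is one option (purely illustrative; the test itself does not do this):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run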
	I1008 19:07:26.017613  585014 ssh_runner.go:195] Run: grep 192.168.72.183	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:26.021401  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:26.033347  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:25.972405  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972868  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972891  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:25.972831  586083 retry.go:31] will retry after 1.186801161s: waiting for machine to come up
	I1008 19:07:27.161319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161877  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161900  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:27.161823  586083 retry.go:31] will retry after 1.448383195s: waiting for machine to come up
	I1008 19:07:28.611319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611697  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:28.611613  586083 retry.go:31] will retry after 1.738948191s: waiting for machine to come up
	I1008 19:07:30.352081  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352582  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352617  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:30.352530  586083 retry.go:31] will retry after 2.624799898s: waiting for machine to come up
	I1008 19:07:26.138298  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:26.154419  585014 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146 for IP: 192.168.72.183
	I1008 19:07:26.154447  585014 certs.go:194] generating shared ca certs ...
	I1008 19:07:26.154470  585014 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:26.154651  585014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:26.154714  585014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:26.154729  585014 certs.go:256] generating profile certs ...
	I1008 19:07:26.154860  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/client.key
	I1008 19:07:26.154948  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key.b07aac04
	I1008 19:07:26.155003  585014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key
	I1008 19:07:26.155159  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:26.155202  585014 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:26.155212  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:26.155232  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:26.155256  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:26.155280  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:26.155319  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:26.156076  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:26.187225  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:26.235804  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:26.268034  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:26.292729  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 19:07:26.320118  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:26.351058  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:26.374004  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:07:26.396526  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:26.419067  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:26.441449  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:26.463768  585014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:26.479471  585014 ssh_runner.go:195] Run: openssl version
	I1008 19:07:26.484957  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:26.495286  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501166  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501225  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.507154  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:26.517587  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:26.528157  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532896  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532967  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.540724  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:26.554952  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:26.567160  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571304  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571394  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.576974  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
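Note: the link names 51391683.0, 3ec20f2e.0 and b5213941.0 used above are OpenSSL subject hashes, i.e. what the preceding openssl x509 -hash calls compute (the standard c_rehash naming for a trust store). A minimal sketch of the same update for one of the certs, reusing the paths from the log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem)
	sudo ln -fs /usr/share/ca-certificates/537013.pem "/etc/ssl/certs/${HASH}.0"    # here HASH is 51391683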
	I1008 19:07:26.587198  585014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:26.591621  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:26.597176  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:26.602766  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:26.608373  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:26.613797  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:26.619310  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:26.624702  585014 kubeadm.go:392] StartCluster: {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:26.624831  585014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:26.624878  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.666183  585014 cri.go:89] found id: ""
	I1008 19:07:26.666253  585014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:26.676621  585014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:26.676644  585014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:26.676699  585014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:26.686549  585014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:26.687532  585014 kubeconfig.go:125] found "embed-certs-783146" server: "https://192.168.72.183:8443"
	I1008 19:07:26.689545  585014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:26.698758  585014 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.183
	I1008 19:07:26.698790  585014 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:26.698804  585014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:26.698856  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.738148  585014 cri.go:89] found id: ""
	I1008 19:07:26.738209  585014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:26.753980  585014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:26.763186  585014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:26.763208  585014 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:26.763257  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:07:26.771789  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:26.771847  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:26.780812  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:07:26.789329  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:26.789390  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:26.798230  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.806781  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:26.806842  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.815549  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:07:26.823782  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:26.823830  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:26.832698  585014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:26.841687  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:26.945569  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.159232  585014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213619978s)
	I1008 19:07:28.159280  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.372727  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.456082  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.567486  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:28.567627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.067909  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.568466  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.068627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.567821  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.604366  585014 api_server.go:72] duration metric: took 2.036885191s to wait for apiserver process to appear ...
	I1008 19:07:30.604403  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:30.604440  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.461223  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.461270  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.461286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.499425  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.499473  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.604563  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.614594  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:33.614625  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.105286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.111706  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:34.111747  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.605326  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.612912  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:07:34.619204  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:34.619227  585014 api_server.go:131] duration metric: took 4.014816798s to wait for apiserver health ...
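
	The 403 and 500 responses above are the normal progression while the restarted apiserver finishes its post-start hooks: the 403 is returned while anonymous access to /healthz is likely not yet authorized (the rbac/bootstrap-roles hook is still listed as failed in the 500 output), and the [+]/[-] listing is what the verbose healthz endpoint reports per check. A sketch (not from the log) of querying the same endpoint by hand, using the API address shown above:

	# Sketch, not from the log: querying the apiserver health endpoint directly.
	# --insecure skips TLS verification; "?verbose" returns the per-check list
	# seen above instead of a bare "ok".
	curl --insecure "https://192.168.72.183:8443/healthz?verbose"
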
	I1008 19:07:34.619236  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:34.619242  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:34.621043  585014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:32.980593  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981141  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981171  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:32.981076  586083 retry.go:31] will retry after 3.401015855s: waiting for machine to come up
	I1008 19:07:34.622500  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:34.632627  585014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
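
	The 496-byte file written here is the generated bridge CNI config; its contents are not shown in the log, but they can be inspected on the node afterwards, for example (sketch, not from the log):

	# Sketch, not from the log: viewing the bridge CNI config minikube generated.
	minikube ssh -p embed-certs-783146 -- sudo cat /etc/cni/net.d/1-k8s.conflist
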
	I1008 19:07:34.654975  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:34.667824  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:34.667853  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:34.667863  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:34.667874  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:34.667879  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:34.667884  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:34.667890  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:34.667899  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:34.667904  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:34.667910  585014 system_pods.go:74] duration metric: took 12.913884ms to wait for pod list to return data ...
	I1008 19:07:34.667919  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:34.672996  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:34.673018  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:34.673029  585014 node_conditions.go:105] duration metric: took 5.105827ms to run NodePressure ...
	I1008 19:07:34.673045  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:34.992309  585014 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996835  585014 kubeadm.go:739] kubelet initialised
	I1008 19:07:34.996861  585014 kubeadm.go:740] duration metric: took 4.524726ms waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996870  585014 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:35.005255  585014 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.012539  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012568  585014 pod_ready.go:82] duration metric: took 7.278613ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.012580  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012589  585014 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.018465  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018489  585014 pod_ready.go:82] duration metric: took 5.8848ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.018500  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018509  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.026503  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026533  585014 pod_ready.go:82] duration metric: took 8.012156ms for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.026544  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026555  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.058419  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058449  585014 pod_ready.go:82] duration metric: took 31.879605ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.058463  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058471  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.458244  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458275  585014 pod_ready.go:82] duration metric: took 399.794285ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.458286  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458292  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.858567  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858612  585014 pod_ready.go:82] duration metric: took 400.312425ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.858625  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858637  585014 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:36.258490  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258520  585014 pod_ready.go:82] duration metric: took 399.870797ms for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:36.258530  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258538  585014 pod_ready.go:39] duration metric: took 1.261659261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:36.258558  585014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:07:36.269993  585014 ops.go:34] apiserver oom_adj: -16
	I1008 19:07:36.270016  585014 kubeadm.go:597] duration metric: took 9.593365367s to restartPrimaryControlPlane
	I1008 19:07:36.270025  585014 kubeadm.go:394] duration metric: took 9.645330227s to StartCluster
	I1008 19:07:36.270044  585014 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.270125  585014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:07:36.271682  585014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.271945  585014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:07:36.272024  585014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:07:36.272130  585014 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-783146"
	I1008 19:07:36.272158  585014 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-783146"
	W1008 19:07:36.272166  585014 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:07:36.272152  585014 addons.go:69] Setting default-storageclass=true in profile "embed-certs-783146"
	I1008 19:07:36.272179  585014 addons.go:69] Setting metrics-server=true in profile "embed-certs-783146"
	I1008 19:07:36.272198  585014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-783146"
	I1008 19:07:36.272203  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272213  585014 addons.go:234] Setting addon metrics-server=true in "embed-certs-783146"
	W1008 19:07:36.272224  585014 addons.go:243] addon metrics-server should already be in state true
	I1008 19:07:36.272256  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272187  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:36.272616  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272638  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272658  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272689  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272694  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272738  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.274263  585014 out.go:177] * Verifying Kubernetes components...
	I1008 19:07:36.275444  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:36.288219  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1008 19:07:36.288686  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.289297  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.289328  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.289721  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.290415  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.290462  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.293043  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1008 19:07:36.293374  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I1008 19:07:36.293461  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293721  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293954  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.293978  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294188  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.294212  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294299  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294504  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.294534  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294982  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.295028  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.297638  585014 addons.go:234] Setting addon default-storageclass=true in "embed-certs-783146"
	W1008 19:07:36.297661  585014 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:07:36.297692  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.298042  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.298081  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.309286  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1008 19:07:36.309776  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310024  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1008 19:07:36.310337  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310360  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.310478  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310771  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.310980  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310997  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.311013  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.311330  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.311500  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.313004  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1008 19:07:36.313159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313368  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.313523  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313926  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.313951  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.314284  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.314777  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.314820  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.314992  585014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:07:36.315010  585014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:07:36.316168  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:07:36.316191  585014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:07:36.316212  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.316309  585014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.316333  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:07:36.316352  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.320088  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320418  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320566  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320591  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320733  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.320888  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320912  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320931  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321074  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.321181  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321235  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321400  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321397  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.321532  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.331532  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1008 19:07:36.331881  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.332309  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.332331  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.332724  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.332929  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.334589  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.334775  585014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.334797  585014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:07:36.334811  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.337675  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.338093  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338209  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.338380  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.338491  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.338600  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.444532  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:36.462719  585014 node_ready.go:35] waiting up to 6m0s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:36.519485  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.613714  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:07:36.613738  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:07:36.637773  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.645883  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:07:36.645907  585014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:07:36.685924  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.685952  585014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:07:36.710461  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.970231  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970256  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970563  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970589  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970599  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970606  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970860  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970881  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970892  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980520  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.980538  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.980826  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980869  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.980888  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.676577  585014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038767196s)
	I1008 19:07:37.676633  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.676646  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.676972  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.676982  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677040  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677058  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.677075  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.677333  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677351  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677375  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689600  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689615  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.689883  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.689897  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689901  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.689917  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689934  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.690210  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.690227  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.690240  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.690256  585014 addons.go:475] Verifying addon metrics-server=true in "embed-certs-783146"
	I1008 19:07:37.692035  585014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
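
	With the addons applied, their state can be spot-checked from the host with ordinary kubectl commands; a sketch assuming the kubeconfig context carries the profile name (not from the log):

	# Sketch, not from the log: spot-checking the addons that were just enabled.
	kubectl --context embed-certs-783146 -n kube-system get deploy metrics-server
	kubectl --context embed-certs-783146 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context embed-certs-783146 get storageclass
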
	I1008 19:07:36.383659  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.383993  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.384026  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:36.383939  586083 retry.go:31] will retry after 3.325274435s: waiting for machine to come up
	I1008 19:07:39.713420  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.713902  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Found IP for machine: 192.168.50.213
	I1008 19:07:39.713926  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserving static IP address...
	I1008 19:07:39.713945  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has current primary IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.714332  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.714362  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserved static IP address: 192.168.50.213
	I1008 19:07:39.714382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | skip adding static IP to network mk-default-k8s-diff-port-142496 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"}
	I1008 19:07:39.714401  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Getting to WaitForSSH function...
	I1008 19:07:39.714415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for SSH to be available...
	I1008 19:07:39.716542  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.716905  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.716951  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.717025  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH client type: external
	I1008 19:07:39.717052  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa (-rw-------)
	I1008 19:07:39.717111  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:39.717147  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | About to run SSH command:
	I1008 19:07:39.717165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | exit 0
	I1008 19:07:39.842089  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:39.842499  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetConfigRaw
	I1008 19:07:39.843125  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:39.845604  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.845976  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.846008  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.846276  585096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:07:39.846509  585096 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:39.846541  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:39.846768  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.849107  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849411  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.849435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849743  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.849924  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850084  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850236  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.850422  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.850679  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.850695  585096 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:39.950481  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:39.950507  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.950796  585096 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142496"
	I1008 19:07:39.950825  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.951016  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.953300  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.953678  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953833  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.954002  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954168  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954297  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.954450  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.954621  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.954636  585096 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142496 && echo "default-k8s-diff-port-142496" | sudo tee /etc/hostname
	I1008 19:07:40.068848  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142496
	
	I1008 19:07:40.068876  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.071855  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072195  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.072226  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072392  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.072563  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072746  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072871  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.073039  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.073237  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.073257  585096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:40.183039  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:40.183073  585096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:40.183116  585096 buildroot.go:174] setting up certificates
	I1008 19:07:40.183131  585096 provision.go:84] configureAuth start
	I1008 19:07:40.183146  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:40.183451  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:40.185904  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186264  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.186284  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186453  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.188672  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.189037  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189134  585096 provision.go:143] copyHostCerts
	I1008 19:07:40.189204  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:40.189217  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:40.189281  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:40.189427  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:40.189441  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:40.189474  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:40.189563  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:40.189573  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:40.189600  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:40.189679  585096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142496 san=[127.0.0.1 192.168.50.213 default-k8s-diff-port-142496 localhost minikube]
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:37.693230  585014 addons.go:510] duration metric: took 1.421218857s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1008 19:07:38.466754  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:40.967492  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:40.418969  585096 provision.go:177] copyRemoteCerts
	I1008 19:07:40.419032  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:40.419060  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.421382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421701  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.421730  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421912  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.422108  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.422287  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.422426  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.500533  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:40.524199  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 19:07:40.547495  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:07:40.570656  585096 provision.go:87] duration metric: took 387.509086ms to configureAuth
	I1008 19:07:40.570687  585096 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:40.570859  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:40.570934  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.573578  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.573941  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.573970  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.574088  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.574290  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574534  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574680  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.574881  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.575056  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.575074  585096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:40.795575  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:40.795604  585096 machine.go:96] duration metric: took 949.073836ms to provisionDockerMachine
	I1008 19:07:40.795618  585096 start.go:293] postStartSetup for "default-k8s-diff-port-142496" (driver="kvm2")
	I1008 19:07:40.795629  585096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:40.795646  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:40.796003  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:40.796042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.798307  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798635  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.798666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798881  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.799039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.799249  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.799369  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.880470  585096 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:40.884632  585096 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:40.884660  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:40.884719  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:40.884834  585096 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:40.884947  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:40.893828  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:40.917278  585096 start.go:296] duration metric: took 121.644332ms for postStartSetup
	I1008 19:07:40.917320  585096 fix.go:56] duration metric: took 19.950206082s for fixHost
	I1008 19:07:40.917342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.919971  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920315  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.920342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920539  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.920782  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.920969  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.921114  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.921292  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.921519  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.921535  585096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:41.022573  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414460.977520721
	
	I1008 19:07:41.022596  585096 fix.go:216] guest clock: 1728414460.977520721
	I1008 19:07:41.022603  585096 fix.go:229] Guest: 2024-10-08 19:07:40.977520721 +0000 UTC Remote: 2024-10-08 19:07:40.917324605 +0000 UTC m=+230.557951471 (delta=60.196116ms)
	I1008 19:07:41.022627  585096 fix.go:200] guest clock delta is within tolerance: 60.196116ms
	I1008 19:07:41.022634  585096 start.go:83] releasing machines lock for "default-k8s-diff-port-142496", held for 20.055558507s
	I1008 19:07:41.022665  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.022896  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:41.025861  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026272  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.026301  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026479  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027126  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027537  585096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:41.027581  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.027725  585096 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:41.027749  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.030474  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.030745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031094  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031123  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031148  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031322  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031511  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031572  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031827  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.031883  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.135368  585096 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:41.141492  585096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:41.288617  585096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:41.295482  585096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:41.295550  585096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:41.310709  585096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:41.310738  585096 start.go:495] detecting cgroup driver to use...
	I1008 19:07:41.310821  585096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:41.328574  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:41.342506  585096 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:41.342564  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:41.356308  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:41.372510  585096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:41.497084  585096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:41.665187  585096 docker.go:233] disabling docker service ...
	I1008 19:07:41.665272  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:41.682309  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:41.702567  585096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:41.882727  585096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:42.006479  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:42.020474  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:42.039750  585096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:42.039834  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.050395  585096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:42.050449  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.060572  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.071974  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.083208  585096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:42.097166  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.110090  585096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.128424  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.139296  585096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:42.148278  585096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:42.148320  585096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:42.164007  585096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:42.173218  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:42.303890  585096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:42.412074  585096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:42.412155  585096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:42.418606  585096 start.go:563] Will wait 60s for crictl version
	I1008 19:07:42.418662  585096 ssh_runner.go:195] Run: which crictl
	I1008 19:07:42.422670  585096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:42.469322  585096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:42.469432  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.501089  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.530412  585096 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:42.531554  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:42.534587  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.534928  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:42.534968  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.535235  585096 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:42.539279  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:42.552259  585096 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:42.552380  585096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:42.552447  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:42.588849  585096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:42.588928  585096 ssh_runner.go:195] Run: which lz4
	I1008 19:07:42.592785  585096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:42.597089  585096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:42.597119  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:44.003959  585096 crio.go:462] duration metric: took 1.411213503s to copy over tarball
	I1008 19:07:44.004075  585096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:43.467315  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:43.975147  585014 node_ready.go:49] node "embed-certs-783146" has status "Ready":"True"
	I1008 19:07:43.975180  585014 node_ready.go:38] duration metric: took 7.512429362s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:43.975194  585014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:43.982537  585014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999539  585014 pod_ready.go:93] pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:43.999566  585014 pod_ready.go:82] duration metric: took 16.995034ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999578  585014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506007  585014 pod_ready.go:93] pod "etcd-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:44.506032  585014 pod_ready.go:82] duration metric: took 506.447262ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506043  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
	I1008 19:07:46.159478  585096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155358377s)
	I1008 19:07:46.159512  585096 crio.go:469] duration metric: took 2.155494994s to extract the tarball
	I1008 19:07:46.159532  585096 ssh_runner.go:146] rm: /preloaded.tar.lz4
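
	The preloaded-image tarball above is copied onto the guest and unpacked by shelling out to tar with an lz4 filter. As a rough local sketch of that step (minikube itself runs the command on the guest via its SSH runner rather than locally), the same invocation could be driven from Go with os/exec; the paths and flags match the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same flags as in the log above: preserve xattrs (including
		// security.capability) and decompress through lz4 into /var.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
	}
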
	I1008 19:07:46.196073  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:46.239224  585096 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:46.239253  585096 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:46.239263  585096 kubeadm.go:934] updating node { 192.168.50.213 8444 v1.31.1 crio true true} ...
	I1008 19:07:46.239412  585096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:46.239482  585096 ssh_runner.go:195] Run: crio config
	I1008 19:07:46.284916  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:46.284941  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:46.284959  585096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:46.284980  585096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142496 NodeName:default-k8s-diff-port-142496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:46.285145  585096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142496"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:46.285218  585096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:46.295176  585096 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:46.295278  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:46.304340  585096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1008 19:07:46.320234  585096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:46.336215  585096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1008 19:07:46.352435  585096 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:46.355991  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:46.367424  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:46.491070  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:46.509165  585096 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496 for IP: 192.168.50.213
	I1008 19:07:46.509192  585096 certs.go:194] generating shared ca certs ...
	I1008 19:07:46.509213  585096 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:46.509413  585096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:46.509488  585096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:46.509507  585096 certs.go:256] generating profile certs ...
	I1008 19:07:46.509642  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/client.key
	I1008 19:07:46.509724  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key.8b79a92b
	I1008 19:07:46.509806  585096 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key
	I1008 19:07:46.510014  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:46.510069  585096 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:46.510082  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:46.510109  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:46.510154  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:46.510177  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:46.510220  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:46.510965  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:46.548979  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:46.588042  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:46.617201  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:46.645499  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 19:07:46.673075  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:46.705336  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:46.727739  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:07:46.755352  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:46.782421  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:46.804813  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:46.827321  585096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:46.843375  585096 ssh_runner.go:195] Run: openssl version
	I1008 19:07:46.848936  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:46.860851  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865320  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865379  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.871107  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:46.881518  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:46.891868  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.895991  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.896026  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.901219  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:46.914282  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:46.925095  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929407  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929465  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.934778  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:46.946807  585096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:46.951173  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:46.957072  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:46.962822  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:46.968584  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:46.974679  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:46.980081  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
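
	The run of "openssl x509 ... -checkend 86400" commands above verifies that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A standard-library Go sketch of the same check follows; the file path argument is just an example taken from the log, and the helper name is hypothetical.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, the analogue of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		if expiring {
			fmt.Println("certificate expires within 24h; would need regeneration")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}
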
	I1008 19:07:46.985537  585096 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:46.985659  585096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:46.985706  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.025838  585096 cri.go:89] found id: ""
	I1008 19:07:47.025924  585096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:47.037778  585096 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:47.037800  585096 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:47.037847  585096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:47.049787  585096 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:47.050778  585096 kubeconfig.go:125] found "default-k8s-diff-port-142496" server: "https://192.168.50.213:8444"
	I1008 19:07:47.052921  585096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:47.062696  585096 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I1008 19:07:47.062747  585096 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:47.062775  585096 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:47.062822  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.101981  585096 cri.go:89] found id: ""
	I1008 19:07:47.102054  585096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:47.119421  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:47.129168  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:47.129189  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:47.129253  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:07:47.138071  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:47.138125  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:47.147202  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:07:47.155923  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:47.155979  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:47.164829  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.173366  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:47.173413  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.182417  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:07:47.191170  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:47.191228  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:47.200115  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:47.209146  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:47.314572  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.318198  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003546788s)
	I1008 19:07:48.318245  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.533505  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.617977  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.743670  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:48.743782  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.244765  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.744287  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.243920  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:46.513648  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:49.013409  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:50.422334  585014 pod_ready.go:93] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.422364  585014 pod_ready.go:82] duration metric: took 5.916314463s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.422379  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929739  585014 pod_ready.go:93] pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.929775  585014 pod_ready.go:82] duration metric: took 507.386631ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929790  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935612  585014 pod_ready.go:93] pod "kube-proxy-9l7t7" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.935638  585014 pod_ready.go:82] duration metric: took 5.84081ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935650  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941106  585014 pod_ready.go:93] pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.941131  585014 pod_ready.go:82] duration metric: took 5.47259ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941143  585014 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
	I1008 19:07:50.744524  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.830124  585096 api_server.go:72] duration metric: took 2.086461343s to wait for apiserver process to appear ...
	I1008 19:07:50.830161  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:50.830192  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:50.830915  585096 api_server.go:269] stopped: https://192.168.50.213:8444/healthz: Get "https://192.168.50.213:8444/healthz": dial tcp 192.168.50.213:8444: connect: connection refused
	I1008 19:07:51.331031  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.027442  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.027468  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.027483  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.101043  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.101073  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.330385  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.335009  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.335035  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:54.830407  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.835912  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.835939  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:55.330454  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:55.336271  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:07:55.343556  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:55.343586  585096 api_server.go:131] duration metric: took 4.513416619s to wait for apiserver health ...
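	The 403 ("system:anonymous") and 500 ("healthz check failed" while the rbac/bootstrap-roles and scheduling post-start hooks finish), followed by the final 200, are the normal progression while a restarted apiserver comes up. A minimal sketch of that polling loop (illustrative only; waitForHealthz is not minikube's function, and TLS verification is skipped the way an unauthenticated bootstrap probe would):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
	// or the timeout elapses. Connection errors, 403 and 500 responses, as seen in
	// the log above, are all treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.213:8444/healthz", 4*time.Minute))
	}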
	I1008 19:07:55.343604  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:55.343612  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:55.345259  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:55.346612  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:55.357899  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:55.383903  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:52.948407  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:55.449059  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:07:55.397903  585096 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:55.397935  585096 system_pods.go:61] "coredns-7c65d6cfc9-tkg8j" [0b436a1f-2b8e-4a5f-8063-695480275f2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:55.397944  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [cc702ae5-7e74-4a18-942e-1d236d39c43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:55.397952  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [da72d2f3-aab5-42c3-9733-7c0ce470e61e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:55.397959  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [de964717-b4de-4c7c-a9b5-164e7a048d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:55.397966  585096 system_pods.go:61] "kube-proxy-lwggr" [d5d96599-c3d3-4eba-a2ad-0c027e8ef1ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:55.397971  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [9218d69d-97ca-4680-856b-95c43fa371ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:55.397976  585096 system_pods.go:61] "metrics-server-6867b74b74-pfc2c" [9bafd6da-a33e-4182-a0d7-5e4c9473f057] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:55.397982  585096 system_pods.go:61] "storage-provisioner" [b60980ab-2552-404e-b351-4b163a075732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:55.397988  585096 system_pods.go:74] duration metric: took 14.056648ms to wait for pod list to return data ...
	I1008 19:07:55.397997  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:55.403870  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:55.403906  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:55.403920  585096 node_conditions.go:105] duration metric: took 5.917994ms to run NodePressure ...
	I1008 19:07:55.403941  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:55.677555  585096 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682514  585096 kubeadm.go:739] kubelet initialised
	I1008 19:07:55.682539  585096 kubeadm.go:740] duration metric: took 4.953783ms waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682550  585096 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:55.688641  585096 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:57.695361  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.195582  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:57.948167  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.446946  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.834691  584371 start.go:364] duration metric: took 54.903126319s to acquireMachinesLock for "no-preload-966632"
	I1008 19:08:01.834745  584371 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:08:01.834753  584371 fix.go:54] fixHost starting: 
	I1008 19:08:01.835158  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:01.835200  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:01.854850  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1008 19:08:01.855220  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:01.855740  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:01.855770  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:01.856201  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:01.856428  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:01.856587  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:01.857921  584371 fix.go:112] recreateIfNeeded on no-preload-966632: state=Stopped err=<nil>
	I1008 19:08:01.857943  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	W1008 19:08:01.858110  584371 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:08:01.859994  584371 out.go:177] * Restarting existing kvm2 VM for "no-preload-966632" ...
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
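The sshutil lines above build SSH clients from the machine's generated key so the following ssh_runner commands can run on the VM. A stand-alone sketch of an equivalent client using golang.org/x/crypto/ssh rather than minikube's own wrapper; the address, user and key path are copied from the log, the module must be present in go.mod, and ignoring the host key is only tolerable because the VM was just created by the same process.

// ssh_sketch.go - minimal SSH client mirroring the connection set up above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM created by this run
	}
	client, err := ssh.Dial("tcp", "192.168.39.90:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("date +%s.%N")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote time: %s", out)
}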
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
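Here minikube sidelines any bridge/podman CNI configs so they no longer compete with the CNI it manages itself. A rough local sketch of the same rename-to-.mk_disabled step in Go; the real run happens over SSH with sudo, so touching /etc/cni/net.d requires root.

// disable_bridge_cni.go - rename bridge/podman CNI configs so the runtime ignores them.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d" // same directory as in the log; needs root to modify
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Printf("disabled %s\n", src)
		}
	}
}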
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
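The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup), then reloads systemd and restarts CRI-O. A small Go sketch of the two main substitutions applied to a local copy of the file, assuming the same key names as the log; editing the real file needs root.

// crio_conf_sketch.go - re-create the pause_image / cgroup_manager sed edits on a local copy.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // local copy; the real file lives under /etc/crio/crio.conf.d/
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}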
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 19:08:01.861355  584371 main.go:141] libmachine: (no-preload-966632) Calling .Start
	I1008 19:08:01.861539  584371 main.go:141] libmachine: (no-preload-966632) Ensuring networks are active...
	I1008 19:08:01.862455  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network default is active
	I1008 19:08:01.862878  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network mk-no-preload-966632 is active
	I1008 19:08:01.863368  584371 main.go:141] libmachine: (no-preload-966632) Getting domain xml...
	I1008 19:08:01.864106  584371 main.go:141] libmachine: (no-preload-966632) Creating domain...
	I1008 19:08:03.179854  584371 main.go:141] libmachine: (no-preload-966632) Waiting to get IP...
	I1008 19:08:03.180838  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.181232  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.181301  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.181206  586496 retry.go:31] will retry after 229.567854ms: waiting for machine to come up
	I1008 19:08:03.412710  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.413201  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.413225  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.413170  586496 retry.go:31] will retry after 361.675143ms: waiting for machine to come up
	I1008 19:08:03.776466  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.777140  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.777184  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.777047  586496 retry.go:31] will retry after 323.194852ms: waiting for machine to come up
	I1008 19:08:04.101865  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.102357  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.102388  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.102310  586496 retry.go:31] will retry after 484.995282ms: waiting for machine to come up
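The retry.go lines above poll libvirt for the new domain's DHCP lease, waiting a little longer between attempts each time. A simplified sketch of that wait-with-growing-delay pattern; lookupIP is a placeholder for the real lease query, and the 30s budget and growth factor are assumptions.

// wait_for_ip.go - poll a condition with increasing delays, as the "will retry after" lines show.
package main

import (
	"errors"
	"fmt"
	"time"
)

func lookupIP() (string, error) {
	// placeholder: minikube asks libvirt for the domain's current DHCP lease here
	return "", errors.New("no lease yet")
}

func main() {
	deadline := time.Now().Add(30 * time.Second)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly like the log's backoff
	}
	fmt.Println("timed out waiting for machine IP")
}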
	I1008 19:08:02.698935  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:05.195930  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:02.447582  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:04.450889  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
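Because no preloaded images were found on the VM, minikube copies the preload tarball over and unpacks it with tar and lz4 as shown above. A minimal Go sketch that shells out to the same tar invocation; it assumes tar and lz4 are installed, the tarball exists at /preloaded.tar.lz4, and the process has permission to write under /var (the log uses sudo for this).

// extract_preload.go - unpack the lz4-compressed preload tarball into /var.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}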
	I1008 19:08:04.589063  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.589752  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.589780  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.589706  586496 retry.go:31] will retry after 543.703113ms: waiting for machine to come up
	I1008 19:08:05.135522  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.135997  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.136023  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.135944  586496 retry.go:31] will retry after 617.479763ms: waiting for machine to come up
	I1008 19:08:05.754978  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.755541  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.755568  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.755486  586496 retry.go:31] will retry after 849.017716ms: waiting for machine to come up
	I1008 19:08:06.606621  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:06.607072  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:06.607105  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:06.607023  586496 retry.go:31] will retry after 1.133489837s: waiting for machine to come up
	I1008 19:08:07.742713  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:07.743299  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:07.743329  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:07.743252  586496 retry.go:31] will retry after 1.797316795s: waiting for machine to come up
	I1008 19:08:07.196317  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.698409  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.698443  585096 pod_ready.go:82] duration metric: took 12.009772792s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.698475  585096 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.708991  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.709015  585096 pod_ready.go:82] duration metric: took 10.527401ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.709028  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714343  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.714369  585096 pod_ready.go:82] duration metric: took 5.331417ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714383  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.118973  585096 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"False"
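The pod_ready.go lines poll each control-plane pod's Ready condition with a 4m0s budget, logging Ready:"False" until the condition flips. A simplified stand-in for that loop; isPodReady is a placeholder for the API-server query minikube actually performs, and the 2s poll interval is an assumption.

// pod_ready_sketch.go - poll a readiness check until it succeeds or the budget is spent.
package main

import (
	"fmt"
	"time"
)

func isPodReady(name string) bool {
	// placeholder: would read the pod's Ready condition from the Kubernetes API
	return false
}

func main() {
	const pod = "kube-controller-manager-default-k8s-diff-port-142496"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("pod %q did not become Ready within 4m0s\n", pod)
}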
	I1008 19:08:06.948829  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:09.448376  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
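The kubeadm config dumped a few lines above is generated from the cluster's node IP, node name and Kubernetes version and then copied to /var/tmp/minikube/kubeadm.yaml.new as just logged. A toy Go sketch of rendering such a config from a template; it covers only the InitConfiguration fragment and is not minikube's actual generator.

// render_kubeadm.go - render a kubeadm InitConfiguration fragment from a template.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	data := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.39.90", 8443, "old-k8s-version-256554"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}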
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
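The bash one-liner above makes the control-plane.minikube.internal /etc/hosts entry idempotent: strip any previous line for that name, then append a fresh one. The same idea in Go, operating on a local copy named "hosts" so it can run without root (the path is an assumption for the sketch).

// add_hosts_entry.go - replace any existing control-plane.minikube.internal entry, then append a new one.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.90\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}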
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
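The openssl "-checkend 86400" runs above ask whether each certificate will still be valid in 24 hours; certificates that fail the check would be regenerated. An equivalent check in Go with crypto/x509; the path below is one of the certs from the log and reading it requires the same privileges the test has on the VM.

// cert_checkend.go - report whether a PEM certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h and would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}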
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:09.541879  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:09.542340  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:09.542372  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:09.542288  586496 retry.go:31] will retry after 2.238590286s: waiting for machine to come up
	I1008 19:08:11.783440  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:11.783909  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:11.783945  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:11.783858  586496 retry.go:31] will retry after 2.226110801s: waiting for machine to come up
	I1008 19:08:14.012103  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:14.012538  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:14.012561  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:14.012493  586496 retry.go:31] will retry after 2.298206633s: waiting for machine to come up
	I1008 19:08:10.849833  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.849856  585096 pod_ready.go:82] duration metric: took 3.13546554s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.849868  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858341  585096 pod_ready.go:93] pod "kube-proxy-lwggr" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.858367  585096 pod_ready.go:82] duration metric: took 8.492572ms for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858379  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865890  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.865909  585096 pod_ready.go:82] duration metric: took 7.521945ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865918  585096 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:12.873861  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:15.372408  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.450482  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:13.948331  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.313868  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:16.314460  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:16.314484  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:16.314424  586496 retry.go:31] will retry after 3.672085858s: waiting for machine to come up
	I1008 19:08:17.872689  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.372637  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.448090  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:18.947580  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.948804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.989014  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989556  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has current primary IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989576  584371 main.go:141] libmachine: (no-preload-966632) Found IP for machine: 192.168.61.141
	I1008 19:08:19.989589  584371 main.go:141] libmachine: (no-preload-966632) Reserving static IP address...
	I1008 19:08:19.990000  584371 main.go:141] libmachine: (no-preload-966632) Reserved static IP address: 192.168.61.141
	I1008 19:08:19.990036  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.990048  584371 main.go:141] libmachine: (no-preload-966632) Waiting for SSH to be available...
	I1008 19:08:19.990068  584371 main.go:141] libmachine: (no-preload-966632) DBG | skip adding static IP to network mk-no-preload-966632 - found existing host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"}
	I1008 19:08:19.990076  584371 main.go:141] libmachine: (no-preload-966632) DBG | Getting to WaitForSSH function...
	I1008 19:08:19.992644  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.992970  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.993010  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.993081  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH client type: external
	I1008 19:08:19.993104  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa (-rw-------)
	I1008 19:08:19.993136  584371 main.go:141] libmachine: (no-preload-966632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:19.993152  584371 main.go:141] libmachine: (no-preload-966632) DBG | About to run SSH command:
	I1008 19:08:19.993174  584371 main.go:141] libmachine: (no-preload-966632) DBG | exit 0
	I1008 19:08:20.118205  584371 main.go:141] libmachine: (no-preload-966632) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:20.118616  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetConfigRaw
	I1008 19:08:20.119326  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.122203  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122678  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.122708  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122926  584371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 19:08:20.123144  584371 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:20.123164  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:20.123360  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.125759  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126083  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.126108  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126265  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.126442  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.126980  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.127189  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.127201  584371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:20.234458  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:20.234491  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.234781  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:08:20.234811  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.235044  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.237673  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.237993  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.238016  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.238221  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.238418  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238612  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238806  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.238981  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.239176  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.239203  584371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-966632 && echo "no-preload-966632" | sudo tee /etc/hostname
	I1008 19:08:20.360621  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-966632
	
	I1008 19:08:20.360649  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.363600  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.363909  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.363947  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.364166  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.364297  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364426  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364510  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.364630  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.364855  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.364881  584371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-966632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-966632/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-966632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:20.483101  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:20.483131  584371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:20.483149  584371 buildroot.go:174] setting up certificates
	I1008 19:08:20.483161  584371 provision.go:84] configureAuth start
	I1008 19:08:20.483171  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.483429  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.486467  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.486838  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.486871  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.487037  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.489207  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489531  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.489557  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489655  584371 provision.go:143] copyHostCerts
	I1008 19:08:20.489726  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:20.489737  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:20.489803  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:20.489927  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:20.489939  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:20.489987  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:20.490072  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:20.490083  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:20.490110  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:20.490231  584371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.no-preload-966632 san=[127.0.0.1 192.168.61.141 localhost minikube no-preload-966632]
	I1008 19:08:20.618050  584371 provision.go:177] copyRemoteCerts
	I1008 19:08:20.618117  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:20.618149  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.621118  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621458  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.621485  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621670  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.621875  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.622056  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.622224  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:20.704439  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:20.730441  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:08:20.755072  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:08:20.777513  584371 provision.go:87] duration metric: took 294.340685ms to configureAuth
	I1008 19:08:20.777550  584371 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:20.777774  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:08:20.777873  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.780540  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.780956  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.780995  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.781185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.781423  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781615  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.781989  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.782179  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.782203  584371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:21.003896  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:21.003925  584371 machine.go:96] duration metric: took 880.766243ms to provisionDockerMachine
	I1008 19:08:21.003940  584371 start.go:293] postStartSetup for "no-preload-966632" (driver="kvm2")
	I1008 19:08:21.003955  584371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:21.003974  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.004286  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:21.004312  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.007138  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007472  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.007500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007610  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.007820  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.007991  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.008163  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.093075  584371 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:21.097048  584371 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:21.097076  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:21.097160  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:21.097254  584371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:21.097370  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:21.106698  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:21.130484  584371 start.go:296] duration metric: took 126.530716ms for postStartSetup
	I1008 19:08:21.130526  584371 fix.go:56] duration metric: took 19.295774496s for fixHost
	I1008 19:08:21.130550  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.133361  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.133717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.133744  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.134048  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.134269  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134525  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134710  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.134888  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:21.135119  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:21.135135  584371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:21.242740  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414501.194174379
	
	I1008 19:08:21.242765  584371 fix.go:216] guest clock: 1728414501.194174379
	I1008 19:08:21.242776  584371 fix.go:229] Guest: 2024-10-08 19:08:21.194174379 +0000 UTC Remote: 2024-10-08 19:08:21.130530022 +0000 UTC m=+356.786912807 (delta=63.644357ms)
	I1008 19:08:21.242823  584371 fix.go:200] guest clock delta is within tolerance: 63.644357ms
	I1008 19:08:21.242835  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 19.408108613s
	I1008 19:08:21.242857  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.243112  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:21.245967  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246378  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.246409  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246731  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247314  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247500  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247588  584371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:21.247640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.247706  584371 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:21.247731  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.250191  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250228  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250665  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250694  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250729  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250789  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250948  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250962  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251129  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251314  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.251334  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251462  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.353600  584371 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:21.360031  584371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:21.502001  584371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:21.508846  584371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:21.508938  584371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:21.524597  584371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:21.524626  584371 start.go:495] detecting cgroup driver to use...
	I1008 19:08:21.524699  584371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:21.541500  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:21.553886  584371 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:21.553943  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:21.567027  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:21.579965  584371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:21.692823  584371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:21.844393  584371 docker.go:233] disabling docker service ...
	I1008 19:08:21.844461  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:21.860471  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:21.873229  584371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:22.003106  584371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:22.129301  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:22.143314  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:22.161423  584371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:08:22.161494  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.171355  584371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:22.171429  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.180962  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.190212  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.199737  584371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:22.209488  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.219051  584371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.235430  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.245007  584371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:22.253705  584371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:22.253748  584371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:22.265343  584371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:22.275245  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:22.380960  584371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:22.471004  584371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:22.471067  584371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:22.475520  584371 start.go:563] Will wait 60s for crictl version
	I1008 19:08:22.475598  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.479271  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:22.523709  584371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:22.523787  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.551307  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.579271  584371 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:08:22.580608  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:22.583417  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583783  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:22.583825  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583991  584371 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:22.587937  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:22.600324  584371 kubeadm.go:883] updating cluster {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:22.600465  584371 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:08:22.600506  584371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:22.641111  584371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:08:22.641139  584371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:22.641194  584371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.641224  584371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.641284  584371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.641307  584371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.641377  584371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.641407  584371 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 19:08:22.641742  584371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642057  584371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.642568  584371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.642576  584371 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 19:08:22.642669  584371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.642876  584371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.642894  584371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.643310  584371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.799972  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.811504  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.815340  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.815659  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.817303  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.858380  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.864688  584371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1008 19:08:22.864727  584371 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.864762  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.877332  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1008 19:08:22.934971  584371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1008 19:08:22.935035  584371 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.935085  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945549  584371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1008 19:08:22.945594  584371 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.945644  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945645  584371 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1008 19:08:22.945683  584371 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.945685  584371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1008 19:08:22.945730  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945733  584371 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.945796  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981887  584371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1008 19:08:22.982012  584371 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.982059  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981954  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.082208  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.082210  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.082304  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.082411  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.082430  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.082543  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.178344  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.196633  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.196665  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.196733  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.209763  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.209830  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.310142  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.317659  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.317731  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.327221  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.331490  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.346298  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 19:08:23.346412  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.435656  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 19:08:23.435679  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1008 19:08:23.435783  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:23.435788  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:23.441591  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 19:08:23.441673  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:23.441696  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 19:08:23.441782  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 19:08:23.441814  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:23.441856  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:23.441901  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1008 19:08:23.441918  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.441947  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.445597  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1008 19:08:23.445630  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1008 19:08:23.449022  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1008 19:08:23.450009  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.373452  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:24.872600  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:23.448074  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:25.449287  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.950280  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.508431356s)
	I1008 19:08:25.950340  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.508402491s)
	I1008 19:08:25.950344  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1008 19:08:25.950357  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1008 19:08:25.950545  584371 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.50050623s)
	I1008 19:08:25.950600  584371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1008 19:08:25.950611  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.508516442s)
	I1008 19:08:25.950637  584371 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:25.950648  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1008 19:08:25.950680  584371 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:25.950688  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:25.950727  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:29.225357  584371 ssh_runner.go:235] Completed: which crictl: (3.274648192s)
	I1008 19:08:29.225514  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:29.225532  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.27477814s)
	I1008 19:08:29.225561  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1008 19:08:29.225593  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:29.225627  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:27.373617  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.374173  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:27.948313  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.948750  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.696201  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.470655089s)
	I1008 19:08:30.696255  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.470604601s)
	I1008 19:08:30.696284  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1008 19:08:30.696296  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:30.696317  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.696365  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.740520  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:32.685896  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.989500601s)
	I1008 19:08:32.685941  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1008 19:08:32.685971  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.685971  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.945412846s)
	I1008 19:08:32.686046  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.686045  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 19:08:32.686186  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:31.872718  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:33.873665  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:32.447765  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:34.948257  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
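	(The interleaved pod_ready.go lines come from concurrent test profiles that are each waiting for a metrics-server pod's Ready condition to turn True. A minimal sketch of such a check with client-go is shown below; the kubeconfig path is illustrative, and the pod name is copied from the log purely as an example. This is not minikube's actual pod_ready implementation.)

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-pfc2c", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Mirrors the repeated `has status "Ready":"False"` lines in the log.
		fmt.Println(`status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}
```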
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.663874  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.977781248s)
	I1008 19:08:34.663914  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1008 19:08:34.663939  584371 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:34.663942  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977724244s)
	I1008 19:08:34.663973  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1008 19:08:34.663991  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:36.833283  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.169263327s)
	I1008 19:08:36.833320  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1008 19:08:36.833353  584371 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:36.833417  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:37.485901  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 19:08:37.485954  584371 cache_images.go:123] Successfully loaded all cached images
	I1008 19:08:37.485961  584371 cache_images.go:92] duration metric: took 14.844810749s to LoadCachedImages
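	(The cache_images lines above follow a compare-remove-load cycle for each cached image on the crio node: stat the cached tarball, ask podman for the image ID, and if it does not match the expected hash, remove the stale copy with crictl and re-load the tarball with `podman load -i`. The sketch below shows that cycle under the assumption that the commands run locally rather than through minikube's ssh_runner; the helper name and values are illustrative, not minikube's actual code.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage sketches the compare/remove/load cycle seen in the log:
// query podman for the image ID, and if it differs from the expected hash,
// remove the image via crictl and re-load it from the cached tarball.
func ensureImage(name, wantID, tarball string) error {
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", name).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the expected ID
	}
	// "needs transfer": drop whatever the runtime has, then load the tarball.
	_ = exec.Command("sudo", "crictl", "rmi", name).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	// Example values mirroring the log; the hash is the one reported there.
	err := ensureImage(
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
		"/var/lib/minikube/images/storage-provisioner_v5",
	)
	fmt.Println(err)
}
```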
	I1008 19:08:37.485973  584371 kubeadm.go:934] updating node { 192.168.61.141 8443 v1.31.1 crio true true} ...
	I1008 19:08:37.486084  584371 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-966632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:37.486149  584371 ssh_runner.go:195] Run: crio config
	I1008 19:08:37.544511  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:37.544535  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:37.544554  584371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:37.544576  584371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-966632 NodeName:no-preload-966632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:08:37.544718  584371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-966632"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:37.544792  584371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:08:37.556979  584371 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:37.557049  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:37.566249  584371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 19:08:37.583303  584371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:37.599535  584371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
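	(The kubeadm.yaml dumped above and written to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch, assuming gopkg.in/yaml.v3 and a local copy of the file, that walks the documents and prints each apiVersion/kind so the generated config can be spot-checked before kubeadm consumes it:)

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is illustrative; on the node the file is /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.Decoder reads one "---"-separated document per Decode call.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Each document carries apiVersion and kind, e.g. ClusterConfiguration.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
```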
	I1008 19:08:37.616315  584371 ssh_runner.go:195] Run: grep 192.168.61.141	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:37.620089  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:37.632181  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:37.748647  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:37.765577  584371 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632 for IP: 192.168.61.141
	I1008 19:08:37.765600  584371 certs.go:194] generating shared ca certs ...
	I1008 19:08:37.765619  584371 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:37.765829  584371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:37.765890  584371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:37.765904  584371 certs.go:256] generating profile certs ...
	I1008 19:08:37.766020  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.key
	I1008 19:08:37.766095  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key.a515ed11
	I1008 19:08:37.766143  584371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key
	I1008 19:08:37.766334  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:37.766383  584371 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:37.766398  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:37.766430  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:37.766467  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:37.766501  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:37.766562  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:37.767588  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:37.804400  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:37.837466  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:37.865516  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:37.894827  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:08:37.918668  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:08:37.948238  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:37.974152  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:08:37.997284  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:38.019295  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:38.043392  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:38.067971  584371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:38.084940  584371 ssh_runner.go:195] Run: openssl version
	I1008 19:08:38.090779  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:38.102715  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107292  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107355  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.113456  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:38.123904  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:38.134337  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138503  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138561  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.143902  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:38.155393  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:38.167107  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171433  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171480  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.176968  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:38.188437  584371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:38.192733  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:38.198531  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:38.204187  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:38.210522  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:38.216328  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:38.222077  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
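	(Each `openssl x509 -noout -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. A hedged Go equivalent of that check is sketched below; the certificate path is illustrative, while the real files live under /var/lib/minikube/certs as shown in the log.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// mirroring `openssl x509 -noout -checkend <seconds>` from the log.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is already past NotAfter, i.e. the cert is about to expire.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
```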
	I1008 19:08:38.227724  584371 kubeadm.go:392] StartCluster: {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:38.227802  584371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:38.227882  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.262461  584371 cri.go:89] found id: ""
	I1008 19:08:38.262532  584371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:38.272591  584371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:38.272612  584371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:38.272677  584371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:38.282621  584371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:38.283683  584371 kubeconfig.go:125] found "no-preload-966632" server: "https://192.168.61.141:8443"
	I1008 19:08:38.286019  584371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:38.295315  584371 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.141
	I1008 19:08:38.295344  584371 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:38.295357  584371 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:38.295400  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.329462  584371 cri.go:89] found id: ""
	I1008 19:08:38.329533  584371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:38.345901  584371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:38.354899  584371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:38.354920  584371 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:38.354965  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:38.363242  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:38.363282  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:38.373063  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:38.381479  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:38.381530  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:38.390679  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.400033  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:38.400071  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.409308  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:38.417842  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:38.417876  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
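	(The block above is kubeadm.go's stale-kubeconfig cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf it greps for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the endpoint is absent; here the files do not exist at all, so each grep exits with status 2 and the rm is a no-op. A small sketch of that check, with the endpoint and paths copied from the log; it assumes sufficient permissions and is not minikube's actual code.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it.
			fmt.Printf("removing stale %s\n", f)
			_ = os.Remove(f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}
```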
	I1008 19:08:38.427251  584371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:38.437010  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:38.562381  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.344247  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:36.372911  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:38.872768  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:37.448043  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:39.956579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.550458  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.619345  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.718016  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:39.718126  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.218974  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.719108  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.741178  584371 api_server.go:72] duration metric: took 1.023163924s to wait for apiserver process to appear ...
	I1008 19:08:40.741210  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:08:40.741235  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:40.741767  584371 api_server.go:269] stopped: https://192.168.61.141:8443/healthz: Get "https://192.168.61.141:8443/healthz": dial tcp 192.168.61.141:8443: connect: connection refused
	I1008 19:08:41.241356  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.787235  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:08:43.787284  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:08:43.787306  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.914606  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:43.914653  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:44.242033  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.247068  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.247097  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
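	(api_server.go above first waits for the kube-apiserver process via the repeated pgrep calls, then polls https://192.168.61.141:8443/healthz until it returns 200; a connection refused, a 403 for the anonymous user, or a 500 with failing poststarthooks is treated as "not ready yet" and retried. Below is a minimal sketch of such a polling loop, with TLS verification skipped purely to keep the example short; it is an illustration of the pattern, not minikube's actual implementation.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Address mirrors the log; InsecureSkipVerify is only for brevity in this sketch.
	url := "https://192.168.61.141:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// Connection refused etc.: apiserver not listening yet.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		// 403 (anonymous forbidden) and 500 (poststarthooks still failing)
		// both mean "keep waiting", as in the log above.
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
```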
	I1008 19:08:40.873394  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:43.373475  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:42.446900  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:44.447141  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.742212  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.756340  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.756371  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.241997  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.246343  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.246367  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.741898  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.749274  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.749301  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.241889  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.246127  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.246155  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.741694  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.746192  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.746219  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:47.242250  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:47.246571  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:08:47.252812  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:08:47.252843  584371 api_server.go:131] duration metric: took 6.511626175s to wait for apiserver health ...
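	[Editor's illustration, not part of the log: the repeated 500s above come from minikube polling the apiserver's /healthz endpoint until every post-start hook reports ok; here the apiservice-discovery-controller hook is the last to clear. A minimal Go sketch of that kind of polling loop is shown below. The URL, interval, and timeout are assumptions taken from this log, TLS verification is skipped only because the test VM uses a self-signed certificate, and this is not minikube's actual api_server.go implementation.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls url until it returns HTTP 200 or the timeout elapses.
	// Non-200 bodies are printed, which is where the "[+]/[-] poststarthook"
	// lines seen in the log above originate.
	func pollHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// Assumption: self-signed apiserver certificate, as in the test VM.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.61.141:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}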
	I1008 19:08:47.252852  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:47.252858  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:47.254723  584371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:08:47.255933  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:08:47.266073  584371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:08:47.284042  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:08:47.293401  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:08:47.293432  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:08:47.293439  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:08:47.293450  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:08:47.293456  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:08:47.293464  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:08:47.293469  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:08:47.293474  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:08:47.293478  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:08:47.293484  584371 system_pods.go:74] duration metric: took 9.422158ms to wait for pod list to return data ...
	I1008 19:08:47.293493  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:08:47.296923  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:08:47.296947  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:08:47.296960  584371 node_conditions.go:105] duration metric: took 3.462212ms to run NodePressure ...
	I1008 19:08:47.296979  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:47.562271  584371 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566914  584371 kubeadm.go:739] kubelet initialised
	I1008 19:08:47.566938  584371 kubeadm.go:740] duration metric: took 4.63692ms waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566950  584371 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:47.571271  584371 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.575633  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575659  584371 pod_ready.go:82] duration metric: took 4.364181ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.575671  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575680  584371 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.579443  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579465  584371 pod_ready.go:82] duration metric: took 3.775248ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.579475  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579483  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.583747  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583775  584371 pod_ready.go:82] duration metric: took 4.277306ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.583785  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583797  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.687618  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687652  584371 pod_ready.go:82] duration metric: took 103.843425ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.687663  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687669  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.087568  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087601  584371 pod_ready.go:82] duration metric: took 399.92202ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.087613  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087622  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.487223  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487256  584371 pod_ready.go:82] duration metric: took 399.625038ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.487269  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487278  584371 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.887764  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887798  584371 pod_ready.go:82] duration metric: took 400.504473ms for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.887812  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887821  584371 pod_ready.go:39] duration metric: took 1.320859293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
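	[Editor's illustration, not part of the log: the pod_ready checks above are skipped with "(skipping!)" because the hosting node still has status Ready=False right after the restart; once the node is Ready the same wait loop re-evaluates each pod's PodReady condition. A short client-go sketch of that underlying readiness check follows. The kubeconfig path is a placeholder and the pod name is simply the one from this log; this is not minikube's pod_ready.go code.]

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True, which is
	// the condition the pod_ready.go waits above are ultimately checking.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a local kubeconfig pointing at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-r8qft", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod ready:", isPodReady(pod))
	}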
	I1008 19:08:48.887842  584371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:08:48.901255  584371 ops.go:34] apiserver oom_adj: -16
	I1008 19:08:48.901279  584371 kubeadm.go:597] duration metric: took 10.628659432s to restartPrimaryControlPlane
	I1008 19:08:48.901290  584371 kubeadm.go:394] duration metric: took 10.673572592s to StartCluster
	I1008 19:08:48.901313  584371 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.901397  584371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:48.904024  584371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.904361  584371 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:08:48.904455  584371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:08:48.904549  584371 addons.go:69] Setting storage-provisioner=true in profile "no-preload-966632"
	I1008 19:08:48.904565  584371 addons.go:69] Setting default-storageclass=true in profile "no-preload-966632"
	I1008 19:08:48.904594  584371 addons.go:234] Setting addon storage-provisioner=true in "no-preload-966632"
	W1008 19:08:48.904603  584371 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:08:48.904603  584371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-966632"
	I1008 19:08:48.904574  584371 addons.go:69] Setting metrics-server=true in profile "no-preload-966632"
	I1008 19:08:48.904646  584371 addons.go:234] Setting addon metrics-server=true in "no-preload-966632"
	I1008 19:08:48.904651  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.904652  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1008 19:08:48.904670  584371 addons.go:243] addon metrics-server should already be in state true
	I1008 19:08:48.904705  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.905079  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905116  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905133  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905151  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905159  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905205  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.906774  584371 out.go:177] * Verifying Kubernetes components...
	I1008 19:08:48.908138  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:48.942865  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1008 19:08:48.943612  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.944201  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.944232  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.944667  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.944748  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1008 19:08:48.945485  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.945526  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.945763  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.946464  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.946484  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.946530  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1008 19:08:48.946935  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.947052  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.947649  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.947693  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.948006  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.948027  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.948379  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.948602  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.951770  584371 addons.go:234] Setting addon default-storageclass=true in "no-preload-966632"
	W1008 19:08:48.951788  584371 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:08:48.951819  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.952055  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.952095  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.962422  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1008 19:08:48.962931  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.963509  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.963532  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.963908  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.964117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.965879  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.967812  584371 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:08:48.967853  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1008 19:08:48.967817  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1008 19:08:48.968376  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968436  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968885  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.968906  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.968964  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:08:48.968986  584371 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:08:48.969010  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.969290  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.969449  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.969472  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.969910  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.969941  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.970187  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.970430  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.972100  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972523  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.972544  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972677  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.972735  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.973016  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.973191  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.973323  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.974390  584371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:48.975651  584371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:48.975670  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:08:48.975686  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.978500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.978855  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.978876  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.979079  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.979474  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.979640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.979766  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.994846  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1008 19:08:48.995180  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.995592  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.995607  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.995976  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.996173  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.998270  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.998549  584371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:48.998568  584371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:08:48.998591  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:49.000647  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.000908  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:49.000924  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.001078  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:49.001185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:49.001282  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:49.001358  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:49.118217  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:49.138077  584371 node_ready.go:35] waiting up to 6m0s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:49.217300  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:49.241237  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:49.365395  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:08:49.365420  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:08:45.873500  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.373215  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:49.403596  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:08:49.403625  584371 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:08:49.438480  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:49.438540  584371 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:08:49.464366  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:50.474783  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.233506833s)
	I1008 19:08:50.474850  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474862  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.474914  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.257567473s)
	I1008 19:08:50.474955  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474964  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475191  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475206  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475215  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475221  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475280  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475289  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475297  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475303  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475310  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475441  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475454  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475582  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475596  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475628  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482003  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.482031  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.482315  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.482351  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482372  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.512902  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.048483922s)
	I1008 19:08:50.512957  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.512980  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513241  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513257  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513261  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513299  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.513307  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513534  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513552  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513561  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513577  584371 addons.go:475] Verifying addon metrics-server=true in "no-preload-966632"
	I1008 19:08:50.515302  584371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:08:46.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.448332  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:50.449239  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.516457  584371 addons.go:510] duration metric: took 1.612011936s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
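	[Editor's illustration, not part of the log: the addon enable step above copies the storage-provisioner, storageclass, and metrics-server manifests into /etc/kubernetes/addons inside the VM and applies them with the bundled kubectl against the in-VM kubeconfig. The sketch below mimics that kubectl invocation, but runs it locally rather than through minikube's ssh_runner; the binary and manifest paths are copied from the log lines above and are assumptions for any other environment.]

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddonManifests mirrors the "kubectl apply -f ..." call seen in the
	// log above, executed locally instead of over SSH inside the test VM.
	func applyAddonManifests(kubectl, kubeconfig string, manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		return err
	}

	func main() {
		// Paths taken from the log; adjust for a real environment.
		err := applyAddonManifests(
			"/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}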
	I1008 19:08:51.141437  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:53.142166  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:54.141208  584371 node_ready.go:49] node "no-preload-966632" has status "Ready":"True"
	I1008 19:08:54.141238  584371 node_ready.go:38] duration metric: took 5.003121669s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:54.141251  584371 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:54.146685  584371 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151059  584371 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:54.151078  584371 pod_ready.go:82] duration metric: took 4.369406ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151086  584371 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:50.872416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:53.372230  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:52.947461  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:54.950183  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.157153  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.157458  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.658595  584371 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.658617  584371 pod_ready.go:82] duration metric: took 4.507524391s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.658627  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663785  584371 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.663811  584371 pod_ready.go:82] duration metric: took 5.176586ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663823  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668310  584371 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.668342  584371 pod_ready.go:82] duration metric: took 4.509914ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668356  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672380  584371 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.672397  584371 pod_ready.go:82] duration metric: took 4.034104ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672405  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676499  584371 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.676517  584371 pod_ready.go:82] duration metric: took 4.106343ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676527  584371 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:55.873069  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.372424  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:57.448182  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:59.947932  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.682583  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.682958  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:00.872650  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.872783  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:05.371825  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.447340  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:04.447504  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.683354  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.183362  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.183636  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.872083  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.874058  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.947502  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:08.948054  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.683833  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.188267  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:12.371967  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.372404  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.448009  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:13.948106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:15.948926  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:16.683453  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.184073  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:16.873511  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.372353  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:18.446864  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:20.449087  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:21.184209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.683535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:21.373869  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.872606  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:22.947224  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.947961  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:26.183587  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.184349  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:26.373049  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.873435  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.447254  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:29.447470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:30.683953  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.183412  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.371635  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.373244  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.448173  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.458099  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.947741  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:35.184447  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.682974  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.872226  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.872308  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:39.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:38.447494  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:40.947078  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:39.686907  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.183930  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.184443  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.372207  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.872360  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.947712  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:45.447011  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:46.682572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.683539  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.372618  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.373137  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.448127  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.947787  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:50.683784  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:53.183034  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.873417  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.373414  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.948470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.447675  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:55.184058  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.683581  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.872213  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.872323  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.447790  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.947901  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.948561  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:00.182341  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.183137  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:04.186590  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.872469  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.872708  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.372543  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.447536  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.947676  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.683572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.183605  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.373804  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.872253  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.947764  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.948705  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:11.184118  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:13.184173  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.372084  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.373845  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.447832  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.948882  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
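	
	The cycle above keeps repeating because the control plane never comes up: crictl finds no kube-apiserver container, so the kubectl describe against localhost:8443 is refused. A minimal manual check on the node, assuming SSH access to the minikube VM and reusing the commands already shown in this log (the curl health probe is an extra, hypothetical step), might look like:
	
	    # Is a kube-apiserver container present at all? (same crictl call the log runs)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Is anything answering on the API server port? (hypothetical extra probe; -k skips TLS verification)
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	    # Re-run the failing describe exactly as logs.go does
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	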
	I1008 19:10:15.682883  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.184458  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:16.374416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.872958  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:17.446820  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.448067  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:20.686987  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.182360  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.372104  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.872156  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.947427  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.950711  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:25.183749  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:27.184437  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.372132  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.372299  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.948117  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.948772  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:10:29.682860  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:31.683468  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:34.184018  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.872607  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:32.872784  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.373822  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:33.447573  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.447614  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
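	
	The interleaved pod_ready.go lines come from other test clusters (PIDs 584371, 585096, 585014) polling metrics-server pods that never become Ready. Outside the test harness, roughly the same wait can be expressed with kubectl; this is only a sketch: the pod name is copied from the log above and the 5-minute timeout is an arbitrary placeholder:
	
	    # Hypothetical manual equivalent of the pod_ready.go polling seen above
	    kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-rlt25 --timeout=5m
	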
	I1008 19:10:36.682470  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:38.683099  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.872270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.872490  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.450208  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.947418  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:40.683975  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.182280  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.372711  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.873233  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.446673  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.447301  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
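	
	The outer retry loop keeps returning because pgrep never finds a running kube-apiserver for this profile. A compact, hypothetical stand-in for that wait (the pgrep pattern is taken verbatim from the log; the 3-second interval is an arbitrary choice):
	
	    # Poll until an apiserver process for this minikube profile appears
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      echo "kube-apiserver not running yet; retrying"
	      sleep 3
	    done
	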
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:45.188927  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.682373  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.371936  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:49.372190  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.447866  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:48.947579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.683659  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.182955  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:54.184535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.373113  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.373239  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.446974  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.447804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.947655  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:56.682535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.683610  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.872286  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:57.872418  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:59.872720  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.447354  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:00.447456  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:00.684164  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:03.182802  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.873903  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.372767  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:02.947088  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.948196  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:05.683195  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.184305  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:06.872641  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.872811  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.447814  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:09.948377  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:10.186012  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.682829  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:11.374597  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:13.872241  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.447966  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.448396  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.683559  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.185276  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.372851  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:18.872479  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.947677  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:19.449701  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:19.682652  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.683209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.684681  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.371629  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.373290  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.947319  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:24.446685  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.183070  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.185526  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:25.872866  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.371245  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.371878  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.447559  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.948355  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.949170  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:30.683714  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.182628  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.373119  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:34.872336  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.447847  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:35.947756  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:35.183021  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.682254  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.371644  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:39.372270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.447549  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:40.946928  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.182928  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.873545  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.372751  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.947699  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.949106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:44.684067  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.182248  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.183397  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:46.871983  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:48.872101  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.447284  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.448275  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.949171  585014 pod_ready.go:82] duration metric: took 4m0.008012606s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:11:50.949202  585014 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:11:50.949213  585014 pod_ready.go:39] duration metric: took 4m6.974004451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:11:50.949249  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:11:50.949290  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.949351  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.998560  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:50.998584  585014 cri.go:89] found id: ""
	I1008 19:11:50.998591  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:11:50.998649  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.003407  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:51.003490  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
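	[annotation] Each cycle above runs the same container probe: crictl is queried once per control-plane component and an empty ID list is logged as No container was found matching "<name>". A minimal sketch of the equivalent manual probe on the node, assuming crictl is on PATH and the default CRI socket; the loop is illustrative and not minikube code:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -n "$ids" ]; then
	        echo "$name: $ids"
	      else
	        echo "no container matching \"$name\""
	      fi
	    done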
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:51.183611  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:53.683418  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.872692  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:52.873320  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:55.372722  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:51.049315  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:51.049343  585014 cri.go:89] found id: ""
	I1008 19:11:51.049353  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:11:51.049411  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.055212  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:51.055281  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:51.101271  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.101292  585014 cri.go:89] found id: ""
	I1008 19:11:51.101300  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:11:51.101360  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.105902  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:51.105966  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:51.150355  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.150390  585014 cri.go:89] found id: ""
	I1008 19:11:51.150402  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:11:51.150468  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.155116  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:51.155193  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:51.197754  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:51.197779  585014 cri.go:89] found id: ""
	I1008 19:11:51.197790  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:11:51.197846  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.201957  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:51.202023  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:51.239982  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:51.240001  585014 cri.go:89] found id: ""
	I1008 19:11:51.240009  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:11:51.240064  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.244580  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:51.244645  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:51.280099  585014 cri.go:89] found id: ""
	I1008 19:11:51.280126  585014 logs.go:282] 0 containers: []
	W1008 19:11:51.280137  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:51.280144  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:11:51.280205  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:11:51.323467  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:51.323508  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:51.323514  585014 cri.go:89] found id: ""
	I1008 19:11:51.323525  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:11:51.323676  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.328091  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.332113  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:51.332139  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:11:51.455430  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:11:51.455463  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.492792  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:11:51.492824  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.533732  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.533768  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:52.085919  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:11:52.085972  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:52.120874  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:52.120912  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:11:52.163961  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164188  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.164330  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164489  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.195681  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:52.195716  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:52.210569  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:11:52.210601  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:52.256667  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:11:52.256700  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:52.303627  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:11:52.303685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:52.340250  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:11:52.340279  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:52.402179  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:11:52.402213  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:52.440288  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:11:52.440326  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:52.478952  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.478979  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:11:52.479043  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:11:52.479060  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479068  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.479077  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479084  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.479092  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.479101  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
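	[annotation] The kubelet problems summarized above are node-restriction RBAC errors: during startup the kubelet on embed-certs-783146 could not list the kube-root-ca.crt and kube-proxy ConfigMaps because no relationship between the node and those objects had been established yet. A minimal way to pull just those entries from the node, assuming SSH access and the same systemd unit name; the grep pattern is an assumption based on the messages above:

	    sudo journalctl -u kubelet -n 400 | grep -E 'reflector.go:(561|158)'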
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.182942  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:58.184307  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:57.373902  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:59.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
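	[annotation] On the old-k8s-version node every "describe nodes" gather fails with connection refused on localhost:8443, consistent with the empty kube-apiserver container list in the same cycles. A minimal sketch of reproducing the check by hand on that node, assuming the kubeconfig path shown in the log and that ss is installed; neither command is issued by the harness itself:

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'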
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:00.683230  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:03.184294  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.372439  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:04.871327  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.480085  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.498401  585014 api_server.go:72] duration metric: took 4m26.226421652s to wait for apiserver process to appear ...
	I1008 19:12:02.498433  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:12:02.498479  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.498544  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:02.533531  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:02.533563  585014 cri.go:89] found id: ""
	I1008 19:12:02.533575  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:02.533643  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.537914  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:02.537985  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:02.579011  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:02.579039  585014 cri.go:89] found id: ""
	I1008 19:12:02.579049  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:02.579111  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.583628  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:02.583695  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:02.625038  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.625065  585014 cri.go:89] found id: ""
	I1008 19:12:02.625075  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:02.625138  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.629262  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:02.629331  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:02.662964  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:02.662988  585014 cri.go:89] found id: ""
	I1008 19:12:02.662997  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:02.663052  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.666955  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:02.667013  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:02.704552  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:02.704578  585014 cri.go:89] found id: ""
	I1008 19:12:02.704589  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:02.704640  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.708910  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:02.708962  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:02.743196  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.743220  585014 cri.go:89] found id: ""
	I1008 19:12:02.743229  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:02.743276  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.747488  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:02.747563  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:02.789367  585014 cri.go:89] found id: ""
	I1008 19:12:02.789405  585014 logs.go:282] 0 containers: []
	W1008 19:12:02.789418  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:02.789426  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:02.789479  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:02.828607  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:02.828640  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.828646  585014 cri.go:89] found id: ""
	I1008 19:12:02.828656  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:02.828723  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.832981  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.837258  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:02.837284  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.874214  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:02.874249  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.925844  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:02.925879  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.963715  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:02.963744  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.009069  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.009102  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:03.046628  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.046816  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.046947  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.047129  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.080027  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.080068  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:03.203192  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:03.203233  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:03.254645  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:03.254681  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:03.300881  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:03.300918  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:03.347403  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.347440  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.802754  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.802801  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.816658  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:03.816695  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:03.873630  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:03.873670  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:03.910834  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.910862  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:03.910932  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:03.910946  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910955  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.910972  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910983  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.910994  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.911006  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:05.682916  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:07.683273  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:06.872011  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:08.873682  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:09.684055  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.182786  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:14.186868  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
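The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is removed otherwise so the following "kubeadm init" can regenerate it. A minimal Go sketch of that logic, shelling out the same way the log does (illustrative only, not minikube's actual implementation; the endpoint is the one shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Control-plane endpoint taken from the log above (port 8443 for this profile).
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s looks stale or absent, removing it\n", f)
                // Delete the stale kubeconfig so the following "kubeadm init" can regenerate it.
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

In this run every grep fails because the preceding "kubeadm reset" already deleted the files, so all four configs are removed and rewritten from scratch by the init that follows.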
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:10.866893  585096 pod_ready.go:82] duration metric: took 4m0.000956825s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:10.866937  585096 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1008 19:12:10.866961  585096 pod_ready.go:39] duration metric: took 4m15.184400794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:10.866992  585096 kubeadm.go:597] duration metric: took 4m23.829186185s to restartPrimaryControlPlane
	W1008 19:12:10.867049  585096 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:10.867092  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:13.911944  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:12:13.917530  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:12:13.918513  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:12:13.918537  585014 api_server.go:131] duration metric: took 11.420096691s to wait for apiserver health ...
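The apiserver health wait that just completed is a plain HTTPS GET against the /healthz endpoint, repeated until it returns 200. A rough sketch of such a poll using the address from the log (retry interval and overall timeout are assumptions, and certificate verification is skipped for brevity):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // URL taken from the log above; the apiserver presents a cluster-local
        // certificate, so verification is skipped in this sketch.
        url := "https://192.168.72.183:8443/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthz returned 200")
                    return
                }
            }
            time.Sleep(2 * time.Second) // retry interval is an assumption
        }
        fmt.Println("timed out waiting for a healthy apiserver")
    }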
	I1008 19:12:13.918546  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:12:13.918570  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:13.918621  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:13.957026  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:13.957048  585014 cri.go:89] found id: ""
	I1008 19:12:13.957057  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:13.957114  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:13.961553  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:13.961611  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:13.996466  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:13.996497  585014 cri.go:89] found id: ""
	I1008 19:12:13.996508  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:13.996570  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.000972  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:14.001036  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:14.034888  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.034917  585014 cri.go:89] found id: ""
	I1008 19:12:14.034929  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:14.034989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.039145  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:14.039216  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:14.074109  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:14.074134  585014 cri.go:89] found id: ""
	I1008 19:12:14.074145  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:14.074202  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.078291  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:14.078371  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:14.113375  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:14.113403  585014 cri.go:89] found id: ""
	I1008 19:12:14.113413  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:14.113475  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.117909  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:14.118002  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:14.153800  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:14.153823  585014 cri.go:89] found id: ""
	I1008 19:12:14.153833  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:14.153898  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.158233  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:14.158302  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:14.195093  585014 cri.go:89] found id: ""
	I1008 19:12:14.195123  585014 logs.go:282] 0 containers: []
	W1008 19:12:14.195133  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:14.195142  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:14.195203  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:14.230894  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:14.230917  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:14.230921  585014 cri.go:89] found id: ""
	I1008 19:12:14.230929  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:14.230989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.236299  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.240914  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:14.240940  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:14.282289  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282488  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:14.282643  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282824  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:14.315207  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:14.315235  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:14.433616  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:14.433647  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:14.482640  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:14.482685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.524749  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:14.524788  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:14.979562  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:14.979629  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:15.016898  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:15.016941  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:15.058447  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:15.058478  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:15.114345  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:15.114384  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:15.128920  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:15.128948  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:15.176775  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:15.176817  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:15.215091  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:15.215129  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:15.256687  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:15.256731  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:15.311551  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311583  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:15.311641  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:15.311653  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311664  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:15.311676  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311681  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:15.311687  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311695  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:16.682464  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:19.182595  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:21.184074  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:23.682867  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:25.319305  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:12:25.319336  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.319340  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.319344  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.319348  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.319351  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.319354  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.319362  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.319365  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.319371  585014 system_pods.go:74] duration metric: took 11.400819931s to wait for pod list to return data ...
	I1008 19:12:25.319378  585014 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:12:25.322115  585014 default_sa.go:45] found service account: "default"
	I1008 19:12:25.322135  585014 default_sa.go:55] duration metric: took 2.751457ms for default service account to be created ...
	I1008 19:12:25.322143  585014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:12:25.326570  585014 system_pods.go:86] 8 kube-system pods found
	I1008 19:12:25.326590  585014 system_pods.go:89] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.326595  585014 system_pods.go:89] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.326599  585014 system_pods.go:89] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.326604  585014 system_pods.go:89] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.326610  585014 system_pods.go:89] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.326615  585014 system_pods.go:89] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.326625  585014 system_pods.go:89] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.326633  585014 system_pods.go:89] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.326642  585014 system_pods.go:126] duration metric: took 4.494323ms to wait for k8s-apps to be running ...
	I1008 19:12:25.326651  585014 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:12:25.326701  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:25.344597  585014 system_svc.go:56] duration metric: took 17.941012ms WaitForService to wait for kubelet
	I1008 19:12:25.344621  585014 kubeadm.go:582] duration metric: took 4m49.072648847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:12:25.344638  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:12:25.347385  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:12:25.347404  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:12:25.347425  585014 node_conditions.go:105] duration metric: took 2.783181ms to run NodePressure ...
	I1008 19:12:25.347437  585014 start.go:241] waiting for startup goroutines ...
	I1008 19:12:25.347450  585014 start.go:246] waiting for cluster config update ...
	I1008 19:12:25.347463  585014 start.go:255] writing updated cluster config ...
	I1008 19:12:25.347823  585014 ssh_runner.go:195] Run: rm -f paused
	I1008 19:12:25.395903  585014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:12:25.397911  585014 out.go:177] * Done! kubectl is now configured to use "embed-certs-783146" cluster and "default" namespace by default
	I1008 19:12:25.683645  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:28.182995  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:30.183567  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:32.682881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.013046  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.145916528s)
	I1008 19:12:37.013156  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:37.028010  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:37.037493  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:37.046435  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:37.046455  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:37.046495  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:12:37.055422  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:37.055482  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:37.064538  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:12:37.072968  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:37.073021  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:37.081754  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.090143  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:37.090179  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.098726  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:12:37.107261  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:37.107308  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:37.115975  585096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:37.163570  585096 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:12:37.163642  585096 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:37.272891  585096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:37.273025  585096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:37.273151  585096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:12:37.284204  585096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:37.286084  585096 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:37.286175  585096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:37.286263  585096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:37.286385  585096 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:37.286443  585096 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:37.286545  585096 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:37.286638  585096 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:37.286729  585096 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:37.286812  585096 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:37.286912  585096 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:37.287010  585096 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:37.287082  585096 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:37.287172  585096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:37.602946  585096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:37.727897  585096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:12:37.932126  585096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:37.989742  585096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:38.036655  585096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:38.037085  585096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:38.040618  585096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:35.182881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.683718  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:38.042238  585096 out.go:235]   - Booting up control plane ...
	I1008 19:12:38.042374  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:38.042568  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:38.043504  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:38.065666  585096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:38.071727  585096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:38.071814  585096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:38.210382  585096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:12:38.210516  585096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:12:39.213697  585096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003319891s
	I1008 19:12:39.213803  585096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:12:43.717718  585096 kubeadm.go:310] [api-check] The API server is healthy after 4.502167036s
	I1008 19:12:43.728628  585096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:12:43.744283  585096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:12:43.775369  585096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:12:43.775621  585096 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-142496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:12:43.788583  585096 kubeadm.go:310] [bootstrap-token] Using token: srsq4v.7le212xun40ljc7w
	I1008 19:12:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:42.183680  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:44.185065  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:43.789834  585096 out.go:235]   - Configuring RBAC rules ...
	I1008 19:12:43.789945  585096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:12:43.796091  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:12:43.807906  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:12:43.811025  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:12:43.814445  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:12:43.817615  585096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:12:44.122839  585096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:12:44.567387  585096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:12:45.122714  585096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:12:45.123480  585096 kubeadm.go:310] 
	I1008 19:12:45.123590  585096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:12:45.123617  585096 kubeadm.go:310] 
	I1008 19:12:45.123740  585096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:12:45.123749  585096 kubeadm.go:310] 
	I1008 19:12:45.123789  585096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:12:45.123870  585096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:12:45.123958  585096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:12:45.123984  585096 kubeadm.go:310] 
	I1008 19:12:45.124064  585096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:12:45.124080  585096 kubeadm.go:310] 
	I1008 19:12:45.124152  585096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:12:45.124162  585096 kubeadm.go:310] 
	I1008 19:12:45.124248  585096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:12:45.124366  585096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:12:45.124456  585096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:12:45.124469  585096 kubeadm.go:310] 
	I1008 19:12:45.124579  585096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:12:45.124682  585096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:12:45.124692  585096 kubeadm.go:310] 
	I1008 19:12:45.124804  585096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.124926  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:12:45.124953  585096 kubeadm.go:310] 	--control-plane 
	I1008 19:12:45.124958  585096 kubeadm.go:310] 
	I1008 19:12:45.125086  585096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:12:45.125093  585096 kubeadm.go:310] 
	I1008 19:12:45.125182  585096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.125321  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:12:45.126852  585096 kubeadm.go:310] W1008 19:12:37.105673    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127231  585096 kubeadm.go:310] W1008 19:12:37.106373    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127380  585096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:12:45.127429  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:12:45.127452  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:12:45.129742  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:12:45.130870  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:12:45.143909  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
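The 496-byte file copied above is the bridge CNI conflist that the surrounding steps configure. Its exact contents are not shown in the log; the sketch below writes a representative bridge-plus-portmap conflist of the kind commonly used, with all field values assumed for illustration rather than taken from minikube:

    package main

    import "os"

    // A representative bridge CNI conflist of the kind written to
    // /etc/cni/net.d/1-k8s.conflist. The field values (bridge name, subnet,
    // cniVersion) are assumptions for illustration, not the exact file
    // transferred above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        // Needs root on the node; 0644 is a typical permission for CNI config files.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }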
	I1008 19:12:45.170901  585096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:12:45.170965  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:45.170972  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-142496 minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=default-k8s-diff-port-142496 minikube.k8s.io/primary=true
	I1008 19:12:45.198031  585096 ops.go:34] apiserver oom_adj: -16
	I1008 19:12:45.385789  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.684251  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:49.183225  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:45.886434  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.386165  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.886920  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.386786  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.885835  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.386706  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.885981  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.386856  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.471554  585096 kubeadm.go:1113] duration metric: took 4.300656747s to wait for elevateKubeSystemPrivileges
	I1008 19:12:49.471596  585096 kubeadm.go:394] duration metric: took 5m2.486064826s to StartCluster
	I1008 19:12:49.471627  585096 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.471736  585096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:12:49.473381  585096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.473676  585096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:12:49.473768  585096 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:12:49.473874  585096 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473897  585096 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142496"
	I1008 19:12:49.473899  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:12:49.473904  585096 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473923  585096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142496"
	W1008 19:12:49.473907  585096 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:12:49.473939  585096 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473955  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.473967  585096 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.473981  585096 addons.go:243] addon metrics-server should already be in state true
	I1008 19:12:49.474022  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.474283  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474313  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474338  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474366  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474373  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474405  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.475217  585096 out.go:177] * Verifying Kubernetes components...
	I1008 19:12:49.476402  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:12:49.490880  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1008 19:12:49.491405  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.492070  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.492093  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.492454  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.492990  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.493040  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.493623  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 19:12:49.493646  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1008 19:12:49.494011  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494067  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494548  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494565  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494763  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494790  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494930  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495102  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.495276  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495871  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.495908  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.498744  585096 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.498764  585096 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:12:49.498787  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.499142  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.499173  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.514047  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1008 19:12:49.514527  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.515028  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.515046  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.515493  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.515662  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.516519  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1008 19:12:49.517015  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.517643  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.517661  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.517706  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.517757  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I1008 19:12:49.518133  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.518458  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.518617  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.518643  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.518681  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.519107  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.519527  585096 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:12:49.519808  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.519923  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.520415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.520624  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:12:49.520644  585096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:12:49.520669  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.522226  585096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:12:49.523372  585096 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.523396  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:12:49.523415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.523947  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524437  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.524464  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.524830  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.525042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.525198  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.527349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.527693  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527842  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.528009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.528186  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.528325  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.536509  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I1008 19:12:49.536879  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.537341  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.537359  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.537606  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.537897  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.539570  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.539810  585096 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.539831  585096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:12:49.539848  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.542955  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.543522  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543543  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.543726  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.543888  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.544023  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.721845  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:12:49.741622  585096 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.763968  585096 node_ready.go:49] node "default-k8s-diff-port-142496" has status "Ready":"True"
	I1008 19:12:49.764005  585096 node_ready.go:38] duration metric: took 22.348135ms for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.764019  585096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:49.793150  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:49.867565  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.904041  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.912694  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:12:49.912723  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:12:49.962053  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:12:49.962082  585096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:12:50.004678  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.004709  585096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:12:50.068528  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.394807  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394824  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394836  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.394841  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395140  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395161  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395172  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395181  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395181  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395195  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395201  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.395205  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395262  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395425  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395439  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395616  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395668  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395643  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416509  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.416532  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.416815  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416865  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.416880  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634404  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634428  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634722  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.634744  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634752  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634761  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634769  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635036  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635066  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.635079  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.635100  585096 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-142496"
	I1008 19:12:50.636555  585096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:12:51.683959  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.182376  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:50.637816  585096 addons.go:510] duration metric: took 1.164063633s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:12:51.799881  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.299619  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:56.183179  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683102  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683159  584371 pod_ready.go:82] duration metric: took 4m0.006623922s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:58.683173  584371 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:12:58.683184  584371 pod_ready.go:39] duration metric: took 4m4.541923995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:58.683207  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:12:58.683245  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:58.683296  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:58.729385  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:58.729407  584371 cri.go:89] found id: ""
	I1008 19:12:58.729417  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:12:58.729472  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.734291  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:58.734382  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:58.772015  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:12:58.772050  584371 cri.go:89] found id: ""
	I1008 19:12:58.772062  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:12:58.772123  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.776231  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:58.776300  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:58.812962  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:58.812982  584371 cri.go:89] found id: ""
	I1008 19:12:58.812991  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:12:58.813046  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.816951  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:58.817002  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:58.852918  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:58.852939  584371 cri.go:89] found id: ""
	I1008 19:12:58.852946  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:12:58.852992  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.857184  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:58.857245  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:58.895233  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:12:58.895254  584371 cri.go:89] found id: ""
	I1008 19:12:58.895264  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:12:58.895317  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.899301  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:58.899354  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:58.933918  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:58.933946  584371 cri.go:89] found id: ""
	I1008 19:12:58.933956  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:12:58.934003  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.938274  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:58.938361  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:58.980067  584371 cri.go:89] found id: ""
	I1008 19:12:58.980094  584371 logs.go:282] 0 containers: []
	W1008 19:12:58.980104  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:58.980113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:58.980174  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:59.013783  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:12:59.013812  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.013817  584371 cri.go:89] found id: ""
	I1008 19:12:59.013827  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:12:59.013886  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.018420  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.024462  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:12:59.024486  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.062654  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:12:59.062688  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:59.110932  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:59.110966  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:59.248699  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:12:59.248734  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:59.294439  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:12:59.294473  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:59.331208  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:12:59.331241  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:59.374242  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:12:59.374283  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:56.799487  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.800290  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:59.800320  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.800349  585096 pod_ready.go:82] duration metric: took 10.007162242s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.800361  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804590  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.804609  585096 pod_ready.go:82] duration metric: took 4.240474ms for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804620  585096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808737  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.808754  585096 pod_ready.go:82] duration metric: took 4.127686ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808762  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813126  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.813146  585096 pod_ready.go:82] duration metric: took 4.37796ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813154  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817020  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.817039  585096 pod_ready.go:82] duration metric: took 3.878053ms for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817048  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197958  585096 pod_ready.go:93] pod "kube-proxy-wd5kv" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.197983  585096 pod_ready.go:82] duration metric: took 380.928087ms for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197992  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597495  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.597521  585096 pod_ready.go:82] duration metric: took 399.522182ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597529  585096 pod_ready.go:39] duration metric: took 10.833495765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:13:00.597545  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:13:00.597612  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:00.613266  585096 api_server.go:72] duration metric: took 11.139554705s to wait for apiserver process to appear ...
	I1008 19:13:00.613289  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:00.613308  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:13:00.618420  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:13:00.619376  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:00.619399  585096 api_server.go:131] duration metric: took 6.102941ms to wait for apiserver health ...
	I1008 19:13:00.619407  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:00.800687  585096 system_pods.go:59] 9 kube-system pods found
	I1008 19:13:00.800720  585096 system_pods.go:61] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:00.800729  585096 system_pods.go:61] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:00.800733  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:00.800737  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:00.800740  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:00.800743  585096 system_pods.go:61] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:00.800747  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:00.800752  585096 system_pods.go:61] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:00.800755  585096 system_pods.go:61] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:00.800765  585096 system_pods.go:74] duration metric: took 181.352111ms to wait for pod list to return data ...
	I1008 19:13:00.800773  585096 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:00.997631  585096 default_sa.go:45] found service account: "default"
	I1008 19:13:00.997657  585096 default_sa.go:55] duration metric: took 196.876434ms for default service account to be created ...
	I1008 19:13:00.997667  585096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:01.199366  585096 system_pods.go:86] 9 kube-system pods found
	I1008 19:13:01.199396  585096 system_pods.go:89] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:01.199402  585096 system_pods.go:89] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:01.199406  585096 system_pods.go:89] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:01.199409  585096 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:01.199413  585096 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:01.199416  585096 system_pods.go:89] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:01.199419  585096 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:01.199426  585096 system_pods.go:89] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:01.199430  585096 system_pods.go:89] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:01.199439  585096 system_pods.go:126] duration metric: took 201.766214ms to wait for k8s-apps to be running ...
	I1008 19:13:01.199447  585096 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:01.199492  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:01.214863  585096 system_svc.go:56] duration metric: took 15.401989ms WaitForService to wait for kubelet
	I1008 19:13:01.214895  585096 kubeadm.go:582] duration metric: took 11.741185862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:01.214919  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:01.397506  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:01.397530  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:01.397541  585096 node_conditions.go:105] duration metric: took 182.616774ms to run NodePressure ...
	I1008 19:13:01.397553  585096 start.go:241] waiting for startup goroutines ...
	I1008 19:13:01.397560  585096 start.go:246] waiting for cluster config update ...
	I1008 19:13:01.397570  585096 start.go:255] writing updated cluster config ...
	I1008 19:13:01.397828  585096 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:01.448158  585096 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:01.450201  585096 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142496" cluster and "default" namespace by default
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:59.438777  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:59.438814  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:59.945253  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:59.945302  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:00.016570  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:00.016607  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:00.034150  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:00.034183  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:00.075423  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:00.075456  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:00.111132  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:00.111164  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.646570  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:02.666594  584371 api_server.go:72] duration metric: took 4m13.762192057s to wait for apiserver process to appear ...
	I1008 19:13:02.666620  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:02.666663  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:02.666718  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:02.704214  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:02.704242  584371 cri.go:89] found id: ""
	I1008 19:13:02.704250  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:02.704298  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.708636  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:02.708717  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:02.748418  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:02.748444  584371 cri.go:89] found id: ""
	I1008 19:13:02.748455  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:02.748515  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.753267  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:02.753332  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:02.790534  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:02.790562  584371 cri.go:89] found id: ""
	I1008 19:13:02.790571  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:02.790636  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.794880  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:02.794950  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:02.834754  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:02.834774  584371 cri.go:89] found id: ""
	I1008 19:13:02.834781  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:02.834830  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.839391  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:02.839463  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:02.878344  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:02.878371  584371 cri.go:89] found id: ""
	I1008 19:13:02.878380  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:02.878425  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.882939  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:02.883025  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:02.920081  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:02.920104  584371 cri.go:89] found id: ""
	I1008 19:13:02.920112  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:02.920168  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.924141  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:02.924205  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:02.959700  584371 cri.go:89] found id: ""
	I1008 19:13:02.959730  584371 logs.go:282] 0 containers: []
	W1008 19:13:02.959741  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:02.959750  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:02.959822  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:02.996900  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.996927  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:02.996933  584371 cri.go:89] found id: ""
	I1008 19:13:02.996940  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:02.996989  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.001152  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.005021  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:03.005046  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:03.069775  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:03.069813  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:03.120028  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:03.120060  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:03.155756  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:03.155784  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:03.195587  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:03.195624  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:03.231844  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:03.231875  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:03.271156  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:03.271187  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:03.286994  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:03.287017  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:03.397237  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:03.397269  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:03.442373  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:03.442407  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:03.500191  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:03.500222  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:03.535448  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:03.535490  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:03.966382  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:03.966425  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:06.513885  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:13:06.518111  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:13:06.519310  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:06.519331  584371 api_server.go:131] duration metric: took 3.852704338s to wait for apiserver health ...
	I1008 19:13:06.519341  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:06.519370  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:06.519417  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:06.558940  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:06.558965  584371 cri.go:89] found id: ""
	I1008 19:13:06.558979  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:06.559029  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.563471  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:06.563537  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:06.607844  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:06.607873  584371 cri.go:89] found id: ""
	I1008 19:13:06.607883  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:06.607944  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.612399  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:06.612456  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:06.645502  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:06.645521  584371 cri.go:89] found id: ""
	I1008 19:13:06.645528  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:06.645575  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.649442  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:06.649519  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:06.685085  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:06.685114  584371 cri.go:89] found id: ""
	I1008 19:13:06.685126  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:06.685183  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.689859  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:06.689935  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:06.724775  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:06.724803  584371 cri.go:89] found id: ""
	I1008 19:13:06.724814  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:06.724873  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.729489  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:06.729542  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:06.776599  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:06.776626  584371 cri.go:89] found id: ""
	I1008 19:13:06.776636  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:06.776704  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.780790  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:06.780863  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:06.817072  584371 cri.go:89] found id: ""
	I1008 19:13:06.817097  584371 logs.go:282] 0 containers: []
	W1008 19:13:06.817106  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:06.817113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:06.817171  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:06.855429  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:06.855453  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:06.855457  584371 cri.go:89] found id: ""
	I1008 19:13:06.855465  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:06.855520  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.859774  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.863800  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:06.863821  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:06.931413  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:06.931443  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:06.946213  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:06.946236  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:07.070604  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:07.070640  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:07.114749  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:07.114782  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:07.152555  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:07.152584  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:07.192730  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:07.192759  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:07.242001  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:07.242036  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:07.612662  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:07.612714  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:07.656655  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:07.656700  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:07.695462  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:07.695494  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:07.733107  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:07.733143  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:07.779348  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:07.779382  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
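The "Gathering logs for ..." steps above each reduce to running crictl logs --tail 400 against a container ID discovered earlier with crictl ps. A minimal stand-alone sketch of the same collection loop, assuming crictl is on the node's PATH and root access via sudo; this is an illustration, not minikube's actual implementation:

    # gather the last 400 log lines from every CRI container, mirroring the per-component steps above
    for id in $(sudo crictl ps -a --quiet); do
      echo "=== container $id ==="
      sudo crictl logs --tail 400 "$id"
    done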
	I1008 19:13:10.325584  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:13:10.325616  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.325620  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.325624  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.325628  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.325631  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.325634  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.325639  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.325644  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.325651  584371 system_pods.go:74] duration metric: took 3.806304739s to wait for pod list to return data ...
	I1008 19:13:10.325659  584371 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:10.328062  584371 default_sa.go:45] found service account: "default"
	I1008 19:13:10.328082  584371 default_sa.go:55] duration metric: took 2.41797ms for default service account to be created ...
	I1008 19:13:10.328089  584371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:10.332201  584371 system_pods.go:86] 8 kube-system pods found
	I1008 19:13:10.332224  584371 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.332229  584371 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.332233  584371 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.332237  584371 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.332241  584371 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.332245  584371 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.332250  584371 system_pods.go:89] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.332254  584371 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.332261  584371 system_pods.go:126] duration metric: took 4.167739ms to wait for k8s-apps to be running ...
	I1008 19:13:10.332270  584371 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:10.332313  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:10.350257  584371 system_svc.go:56] duration metric: took 17.979349ms WaitForService to wait for kubelet
	I1008 19:13:10.350288  584371 kubeadm.go:582] duration metric: took 4m21.445892386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:10.350310  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:10.352582  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:10.352598  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:10.352609  584371 node_conditions.go:105] duration metric: took 2.294326ms to run NodePressure ...
	I1008 19:13:10.352620  584371 start.go:241] waiting for startup goroutines ...
	I1008 19:13:10.352626  584371 start.go:246] waiting for cluster config update ...
	I1008 19:13:10.352636  584371 start.go:255] writing updated cluster config ...
	I1008 19:13:10.352882  584371 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:10.401998  584371 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:10.404037  584371 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
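At this point the no-preload-966632 profile is up, with metrics-server-6867b74b74-rlt25 still Pending. The readiness checks minikube just performed can be repeated by hand; a minimal sketch, assuming the kubeconfig context is named after the profile as reported above:

    # confirm the kubelet unit is active on the node (the check behind system_svc.go above)
    minikube ssh -p no-preload-966632 -- sudo systemctl is-active kubelet
    # re-list kube-system pods; metrics-server was the only pod not yet Running
    kubectl --context no-preload-966632 get pods -n kube-system -o wide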
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
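The four grep/rm pairs above are minikube's stale-config check: each existing /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise removed before kubeadm init is retried. A condensed sketch of the same check, for illustration only:

    # drop any kubeconfig on the node that does not target the expected API endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done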
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 
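Acting on the suggestion printed above, a hedged sketch of a manual retry with the systemd cgroup driver followed by inspection of the kubelet unit; <profile> is a placeholder for the profile being started by this run, whose name is not shown in this excerpt:

    # retry the start with the suggested kubelet cgroup driver
    minikube start -p <profile> --kubernetes-version=v1.20.0 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
    # if the control plane still times out, inspect the kubelet unit on the node
    minikube ssh -p <profile> -- sudo journalctl --no-pager -xeu kubelet | tail -n 100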
	
	
	==> CRI-O <==
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.457572746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f77445e-c62a-4b3d-b2e7-6e48a7aa4702 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.458723103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c40b894e-82c7-4831-80ad-960696b69000 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.459092820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415287459072206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c40b894e-82c7-4831-80ad-960696b69000 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.459748858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7be3da92-b7d4-43ec-944d-af64676057b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.459800748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7be3da92-b7d4-43ec-944d-af64676057b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.460087277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7be3da92-b7d4-43ec-944d-af64676057b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.496797968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ea4e423-ad14-4d6f-95e8-d815ee99948c name=/runtime.v1.RuntimeService/Version
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.496868292Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ea4e423-ad14-4d6f-95e8-d815ee99948c name=/runtime.v1.RuntimeService/Version
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.503740674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5783dcc6-8ae1-47dc-9ec5-e804b0a1303f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.504629803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415287504605399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5783dcc6-8ae1-47dc-9ec5-e804b0a1303f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.505283758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=971f0bf4-953c-4015-a8e0-311f219475c2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.505347671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=971f0bf4-953c-4015-a8e0-311f219475c2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.505634354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=971f0bf4-953c-4015-a8e0-311f219475c2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.537170374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ecbc0a5-7ad8-4380-96e8-65a0af49af38 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.537254273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ecbc0a5-7ad8-4380-96e8-65a0af49af38 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.538834604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640d8193-ff3a-40a9-99f3-cf5810b56f9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.539236375Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415287539214045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640d8193-ff3a-40a9-99f3-cf5810b56f9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.539732117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=738b549e-1ced-4418-bf2d-40737d0da2bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.539779954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=738b549e-1ced-4418-bf2d-40737d0da2bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.539967732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=738b549e-1ced-4418-bf2d-40737d0da2bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.544939574Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=26a7244a-a6e7-4226-8a01-56899e89a791 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.545193367Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kh9nk,Uid:4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414461346922292,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:07:33.479048432Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&PodSandboxMetadata{Name:busybox,Uid:bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,Namespace:default,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1728414461344977487,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:07:33.479046687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bffb4289c732f7d952ff07e2cfedf930ad0254595b7befd11470a134b59c5e8a,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-4d48d,Uid:7d305dc9-31d0-482b-8b3e-82be14daeaf0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414459547148934,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-4d48d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d305dc9-31d0-482b-8b3e-82be14daeaf0,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:07:33.
479002505Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&PodSandboxMetadata{Name:kube-proxy-9l7t7,Uid:20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414454696058883,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:07:33.479009918Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2ad6a8a6-5f69-4323-b540-2f8d330d8d84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414454691136210,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-10-08T19:07:33.479008217Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-783146,Uid:a4e7ef45f15d8d483fe00339800dc812,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414448978821092,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.183:2379,kubernetes.io/config.hash: a4e7ef45f15d8d483fe00339800dc812,kubernetes.io/config.seen: 2024-10-08T19:07:28.549010206Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-
certs-783146,Uid:57b4da06010d0f3489a51e057e14ecd8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414448976147911,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e14ecd8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57b4da06010d0f3489a51e057e14ecd8,kubernetes.io/config.seen: 2024-10-08T19:07:28.462551684Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-783146,Uid:e4b11e70ade621b4409a16d9ac18a734,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414448969050286,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-783
146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e4b11e70ade621b4409a16d9ac18a734,kubernetes.io/config.seen: 2024-10-08T19:07:28.462552796Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-783146,Uid:4063985ffee1796af14cc67de0ba713a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414448955199678,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.183:8443,kubernetes.io/config.hash: 4063985ffee1796af14cc67de0
ba713a,kubernetes.io/config.seen: 2024-10-08T19:07:28.462548522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=26a7244a-a6e7-4226-8a01-56899e89a791 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.545858382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13a97ae4-3328-4624-b0e6-ee87431e3577 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.545905841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13a97ae4-3328-4624-b0e6-ee87431e3577 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:21:27 embed-certs-783146 crio[694]: time="2024-10-08 19:21:27.546075151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13a97ae4-3328-4624-b0e6-ee87431e3577 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e05aeedd245a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   f738edcebb0b9       storage-provisioner
	8fd99acb7c464       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3769c1b3d855d       busybox
	b4aceabf5c4e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   8cdd040a91ddc       coredns-7c65d6cfc9-kh9nk
	44cb46dbe3fe0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   b0868e02645b7       kube-proxy-9l7t7
	ffa903de853fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   f738edcebb0b9       storage-provisioner
	2a7606685755c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   beda36eaf3c3e       kube-controller-manager-embed-certs-783146
	639ce8bca3484       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   adcb3c5a432af       kube-scheduler-embed-certs-783146
	8355c440ac929       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   391c64cf760e9       kube-apiserver-embed-certs-783146
	ef34a632006c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   947ba3da483b3       etcd-embed-certs-783146
	
	
	==> coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55053 - 58221 "HINFO IN 6943118436927033031.900514570035518152. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017688127s
	
	
	==> describe nodes <==
	Name:               embed-certs-783146
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-783146
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=embed-certs-783146
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T19_00_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 19:00:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-783146
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 19:21:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 19:18:15 +0000   Tue, 08 Oct 2024 19:00:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 19:18:15 +0000   Tue, 08 Oct 2024 19:00:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 19:18:15 +0000   Tue, 08 Oct 2024 19:00:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 19:18:15 +0000   Tue, 08 Oct 2024 19:07:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.183
	  Hostname:    embed-certs-783146
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bba105b17a9417f8d6ef151a389204d
	  System UUID:                0bba105b-17a9-417f-8d6e-f151a389204d
	  Boot ID:                    9643f9ed-a128-450c-a636-5c655cbc3124
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-kh9nk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-783146                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-783146             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-783146    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-9l7t7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-783146             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-4d48d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-783146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-783146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-783146 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-783146 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-783146 event: Registered Node embed-certs-783146 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-783146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-783146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-783146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-783146 event: Registered Node embed-certs-783146 in Controller
	
	
	==> dmesg <==
	[Oct 8 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.817083] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.441179] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.490071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.395884] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.054279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051661] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.206186] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.122442] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.292595] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +4.003135] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +2.202305] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.077030] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.417793] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.576030] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +4.342643] kauditd_printk_skb: 80 callbacks suppressed
	[Oct 8 19:08] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] <==
	{"level":"info","ts":"2024-10-08T19:07:49.584121Z","caller":"traceutil/trace.go:171","msg":"trace[1940259903] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"390.219329ms","start":"2024-10-08T19:07:49.193879Z","end":"2024-10-08T19:07:49.584098Z","steps":["trace[1940259903] 'process raft request'  (duration: 389.751354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:07:49.584882Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T19:07:49.193864Z","time spent":"390.453826ms","remote":"127.0.0.1:49002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7058,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" value_size:6990 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" > >"}
	{"level":"warn","ts":"2024-10-08T19:07:50.219383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.157054ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:07:50.219561Z","caller":"traceutil/trace.go:171","msg":"trace[900855689] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:590; }","duration":"416.356857ms","start":"2024-10-08T19:07:49.803191Z","end":"2024-10-08T19:07:50.219548Z","steps":["trace[900855689] 'range keys from in-memory index tree'  (duration: 416.143676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:07:50.219407Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.206859ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:07:50.219606Z","caller":"traceutil/trace.go:171","msg":"trace[1876856376] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:590; }","duration":"416.408646ms","start":"2024-10-08T19:07:49.803192Z","end":"2024-10-08T19:07:50.219601Z","steps":["trace[1876856376] 'range keys from in-memory index tree'  (duration: 416.200532ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:07:50.219684Z","caller":"traceutil/trace.go:171","msg":"trace[1258042036] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:629; }","duration":"473.448974ms","start":"2024-10-08T19:07:49.746225Z","end":"2024-10-08T19:07:50.219674Z","steps":["trace[1258042036] 'read index received'  (duration: 379.426015ms)","trace[1258042036] 'applied index is now lower than readState.Index'  (duration: 94.022085ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-08T19:07:50.220042Z","caller":"traceutil/trace.go:171","msg":"trace[1024340590] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"624.084072ms","start":"2024-10-08T19:07:49.595943Z","end":"2024-10-08T19:07:50.220027Z","steps":["trace[1024340590] 'process raft request'  (duration: 529.754948ms)","trace[1024340590] 'compare'  (duration: 93.613841ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-08T19:07:50.220184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T19:07:49.595928Z","time spent":"624.19856ms","remote":"127.0.0.1:49002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6866,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" mod_revision:590 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" value_size:6798 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" > >"}
	{"level":"warn","ts":"2024-10-08T19:07:50.220238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.836503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-783146\" ","response":"range_response_count:1 size:6881"}
	{"level":"info","ts":"2024-10-08T19:07:50.220298Z","caller":"traceutil/trace.go:171","msg":"trace[1924967583] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-embed-certs-783146; range_end:; response_count:1; response_revision:591; }","duration":"221.893273ms","start":"2024-10-08T19:07:49.998395Z","end":"2024-10-08T19:07:50.220288Z","steps":["trace[1924967583] 'agreement among raft nodes before linearized reading'  (duration: 221.812712ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:07:50.410076Z","caller":"traceutil/trace.go:171","msg":"trace[1735716275] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"182.099413ms","start":"2024-10-08T19:07:50.227954Z","end":"2024-10-08T19:07:50.410054Z","steps":["trace[1735716275] 'read index received'  (duration: 92.428108ms)","trace[1735716275] 'applied index is now lower than readState.Index'  (duration: 89.670406ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-08T19:07:50.410296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.317957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-783146\" ","response":"range_response_count:1 size:5487"}
	{"level":"info","ts":"2024-10-08T19:07:50.410350Z","caller":"traceutil/trace.go:171","msg":"trace[1932860350] range","detail":"{range_begin:/registry/minions/embed-certs-783146; range_end:; response_count:1; response_revision:591; }","duration":"182.389316ms","start":"2024-10-08T19:07:50.227951Z","end":"2024-10-08T19:07:50.410340Z","steps":["trace[1932860350] 'agreement among raft nodes before linearized reading'  (duration: 182.240472ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:07:50.410545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.539127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-783146\" ","response":"range_response_count:1 size:6815"}
	{"level":"info","ts":"2024-10-08T19:07:50.410630Z","caller":"traceutil/trace.go:171","msg":"trace[260052090] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-783146; range_end:; response_count:1; response_revision:591; }","duration":"182.605878ms","start":"2024-10-08T19:07:50.227988Z","end":"2024-10-08T19:07:50.410594Z","steps":["trace[260052090] 'agreement among raft nodes before linearized reading'  (duration: 182.279257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:08:10.228809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.418026ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4360771338324154968 > lease_revoke:<id:3c84926d878419d4>","response":"size:29"}
	{"level":"info","ts":"2024-10-08T19:08:10.229338Z","caller":"traceutil/trace.go:171","msg":"trace[173322616] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:654; }","duration":"426.129868ms","start":"2024-10-08T19:08:09.803173Z","end":"2024-10-08T19:08:10.229303Z","steps":["trace[173322616] 'read index received'  (duration: 166.981037ms)","trace[173322616] 'applied index is now lower than readState.Index'  (duration: 259.14736ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-08T19:08:10.229522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.336939ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:08:10.229573Z","caller":"traceutil/trace.go:171","msg":"trace[601122315] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:611; }","duration":"426.397424ms","start":"2024-10-08T19:08:09.803168Z","end":"2024-10-08T19:08:10.229565Z","steps":["trace[601122315] 'agreement among raft nodes before linearized reading'  (duration: 426.271977ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:08:10.229692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.075712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4d48d\" ","response":"range_response_count:1 size:4386"}
	{"level":"info","ts":"2024-10-08T19:08:10.229832Z","caller":"traceutil/trace.go:171","msg":"trace[1316531411] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-4d48d; range_end:; response_count:1; response_revision:611; }","duration":"295.14926ms","start":"2024-10-08T19:08:09.934591Z","end":"2024-10-08T19:08:10.229741Z","steps":["trace[1316531411] 'agreement among raft nodes before linearized reading'  (duration: 294.926192ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:17:32.264097Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":835}
	{"level":"info","ts":"2024-10-08T19:17:32.273891Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":835,"took":"9.181302ms","hash":2078501744,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2662400,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-08T19:17:32.273975Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2078501744,"revision":835,"compact-revision":-1}
	
	
	==> kernel <==
	 19:21:27 up 14 min,  0 users,  load average: 0.01, 0.10, 0.08
	Linux embed-certs-783146 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] <==
	W1008 19:17:34.513565       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:17:34.513824       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:17:34.515428       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:17:34.515502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:18:34.515916       1 handler_proxy.go:99] no RequestInfo found in the context
	W1008 19:18:34.515913       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:18:34.516242       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1008 19:18:34.516313       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:18:34.517502       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:18:34.517508       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:20:34.517900       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:20:34.518008       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:20:34.517945       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:20:34.518102       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:20:34.519349       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:20:34.519418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] <==
	E1008 19:16:07.142058       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:16:07.649207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:16:37.149650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:16:37.657916       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:17:07.156250       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:17:07.666627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:17:37.162561       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:17:37.676751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:18:07.168710       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:18:07.684795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:18:15.833976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-783146"
	I1008 19:18:35.557920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="324.655µs"
	E1008 19:18:37.175376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:18:37.693208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:18:50.555836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.485µs"
	E1008 19:19:07.182566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:19:07.699654       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:19:37.187760       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:19:37.707748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:20:07.194815       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:20:07.715238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:20:37.201799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:20:37.722702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:21:07.207658       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:21:07.731066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 19:07:35.112970       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 19:07:35.126657       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.183"]
	E1008 19:07:35.126854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 19:07:35.156909       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 19:07:35.156960       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 19:07:35.156983       1 server_linux.go:169] "Using iptables Proxier"
	I1008 19:07:35.159283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 19:07:35.159617       1 server.go:483] "Version info" version="v1.31.1"
	I1008 19:07:35.159642       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:07:35.161200       1 config.go:199] "Starting service config controller"
	I1008 19:07:35.161238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 19:07:35.161256       1 config.go:105] "Starting endpoint slice config controller"
	I1008 19:07:35.161259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 19:07:35.161626       1 config.go:328] "Starting node config controller"
	I1008 19:07:35.161656       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 19:07:35.261722       1 shared_informer.go:320] Caches are synced for node config
	I1008 19:07:35.261807       1 shared_informer.go:320] Caches are synced for service config
	I1008 19:07:35.261817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] <==
	I1008 19:07:31.582352       1 serving.go:386] Generated self-signed cert in-memory
	W1008 19:07:33.452122       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 19:07:33.452165       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 19:07:33.452178       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 19:07:33.452187       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 19:07:33.524092       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1008 19:07:33.524274       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:07:33.527950       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 19:07:33.528007       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 19:07:33.528432       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 19:07:33.528674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 19:07:33.628966       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 19:20:12 embed-certs-783146 kubelet[901]: E1008 19:20:12.537634     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:20:18 embed-certs-783146 kubelet[901]: E1008 19:20:18.719665     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415218719370392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:18 embed-certs-783146 kubelet[901]: E1008 19:20:18.719715     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415218719370392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:23 embed-certs-783146 kubelet[901]: E1008 19:20:23.538183     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]: E1008 19:20:28.564063     901 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]: E1008 19:20:28.721649     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415228721090239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:28 embed-certs-783146 kubelet[901]: E1008 19:20:28.721720     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415228721090239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:38 embed-certs-783146 kubelet[901]: E1008 19:20:38.540789     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:20:38 embed-certs-783146 kubelet[901]: E1008 19:20:38.724022     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415238723664979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:38 embed-certs-783146 kubelet[901]: E1008 19:20:38.724088     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415238723664979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:48 embed-certs-783146 kubelet[901]: E1008 19:20:48.727130     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415248726756445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:48 embed-certs-783146 kubelet[901]: E1008 19:20:48.727161     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415248726756445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:50 embed-certs-783146 kubelet[901]: E1008 19:20:50.538249     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:20:58 embed-certs-783146 kubelet[901]: E1008 19:20:58.728377     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415258728054557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:58 embed-certs-783146 kubelet[901]: E1008 19:20:58.728856     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415258728054557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:05 embed-certs-783146 kubelet[901]: E1008 19:21:05.537506     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:21:08 embed-certs-783146 kubelet[901]: E1008 19:21:08.730661     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415268730345356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:08 embed-certs-783146 kubelet[901]: E1008 19:21:08.731006     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415268730345356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:18 embed-certs-783146 kubelet[901]: E1008 19:21:18.538065     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:21:18 embed-certs-783146 kubelet[901]: E1008 19:21:18.732687     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415278732308376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:18 embed-certs-783146 kubelet[901]: E1008 19:21:18.732747     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415278732308376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] <==
	I1008 19:08:05.864350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 19:08:05.882900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 19:08:05.882997       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 19:08:23.284848       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 19:08:23.285054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-783146_577336a5-f01e-431e-a81b-e9bab9aca163!
	I1008 19:08:23.286789       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2457038-be1d-43b0-881b-88857d3f7f63", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-783146_577336a5-f01e-431e-a81b-e9bab9aca163 became leader
	I1008 19:08:23.385991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-783146_577336a5-f01e-431e-a81b-e9bab9aca163!
	
	
	==> storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] <==
	I1008 19:07:35.042028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 19:08:05.045506       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783146 -n embed-certs-783146
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-783146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4d48d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-783146 describe pod metrics-server-6867b74b74-4d48d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-783146 describe pod metrics-server-6867b74b74-4d48d: exit status 1 (62.777736ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4d48d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-783146 describe pod metrics-server-6867b74b74-4d48d: exit status 1
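Note: the describe call above does not pass a namespace, so kubectl looks for the pod in the context's default namespace, while the metrics-server pod reported by the field-selector query lives in kube-system (see the node allocation table and kubelet log in the stdout above). A hypothetical manual re-run of the same check with the namespace made explicit, assuming the pod still existed at that point, would be:

	kubectl --context embed-certs-783146 -n kube-system describe pod metrics-server-6867b74b74-4d48d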
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-08 19:22:01.977267313 +0000 UTC m=+6515.466419491
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
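The wait above polls for pods carrying the label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace of this profile. A rough manual equivalent of that check, assuming the cluster is still reachable under the same context name, would be:

	kubectl --context default-k8s-diff-port-142496 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard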
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-142496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-142496 logs -n 25: (1.990580496s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-038693 sudo                            | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                 | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:04:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:20.270613  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:23.342576  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
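The acquireMachinesLock entries above (Delay:500ms Timeout:13m0s) explain why other profiles in this run later report waiting several minutes before they can start: only one profile may create or fix a machine at a time. Below is a minimal, hypothetical Go sketch of such a lock, assuming a simple exclusive lock file; the path and helper name are made up and this is not minikube's actual implementation.

    // acquireProfileLock is a hypothetical sketch of the machine lock logged
    // above (Delay:500ms Timeout:13m0s): create a lock file exclusively and
    // retry until the timeout expires.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquireProfileLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay) // another profile may still hold the lock
        }
    }

    func main() {
        release, err := acquireProfileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock acquired; safe to start the machine")
    }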
	I1008 19:04:29.422638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:32.494606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:38.574600  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:41.646592  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:47.726606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:50.798595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:56.878669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:59.950607  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:06.030583  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:09.102584  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:15.182571  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:18.254590  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:24.334638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:27.406606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:33.486619  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:36.558552  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:42.638565  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:45.710610  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:51.790561  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:54.862591  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:00.942606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:04.014669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:10.094618  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:13.166598  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:19.246573  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:22.318595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:28.398732  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:31.470685  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:37.550574  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:40.622614  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:46.702620  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:49.774581  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:55.854627  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:58.926568  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:07:01.929445  585014 start.go:364] duration metric: took 3m15.782086174s to acquireMachinesLock for "embed-certs-783146"
	I1008 19:07:01.929517  585014 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:01.929523  585014 fix.go:54] fixHost starting: 
	I1008 19:07:01.929889  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:01.929945  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:01.945409  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 19:07:01.945858  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:01.946357  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:01.946387  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:01.946744  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:01.946895  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:01.947028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:01.948399  585014 fix.go:112] recreateIfNeeded on embed-certs-783146: state=Stopped err=<nil>
	I1008 19:07:01.948419  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	W1008 19:07:01.948545  585014 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:01.954020  585014 out.go:177] * Restarting existing kvm2 VM for "embed-certs-783146" ...
	I1008 19:07:01.926825  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:01.926871  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927219  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:07:01.927270  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927475  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:07:01.929278  584371 machine.go:96] duration metric: took 4m37.425232924s to provisionDockerMachine
	I1008 19:07:01.929341  584371 fix.go:56] duration metric: took 4m37.445578307s for fixHost
	I1008 19:07:01.929349  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 4m37.445609603s
	W1008 19:07:01.929369  584371 start.go:714] error starting host: provision: host is not running
	W1008 19:07:01.929510  584371 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1008 19:07:01.929524  584371 start.go:729] Will try again in 5 seconds ...
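The interleaved 584371 lines above are the root of the no-preload-966632 failure: every TCP dial to 192.168.61.141:22 returned "no route to host", so after roughly 4m37s provisioning gave up with "host is not running" and the start was retried. The following is a minimal Go sketch of that dial-until-deadline probe, with hypothetical names and timeouts; it is not minikube's code.

    // waitForTCP is a hypothetical sketch of the probe behind the repeated
    // "Error dialing TCP ... no route to host" lines: keep dialing the guest's
    // SSH port until it answers or an overall deadline passes.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForTCP(addr string, overall, interval time.Duration) error {
        deadline := time.Now().Add(overall)
        for {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err == nil {
                conn.Close()
                return nil // reachable; real provisioning would continue over SSH
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("host is not running: %w", err)
            }
            fmt.Printf("dial %s failed (%v), retrying in %s\n", addr, err, interval)
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForTCP("192.168.61.141:22", 4*time.Minute, 5*time.Second); err != nil {
            fmt.Println("StartHost failed, but will try again:", err)
        }
    }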
	I1008 19:07:01.955309  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Start
	I1008 19:07:01.955452  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 19:07:01.956122  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 19:07:01.956432  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 19:07:01.956743  585014 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 19:07:01.957427  585014 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 19:07:03.159229  585014 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 19:07:03.160116  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.160503  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.160565  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.160497  585935 retry.go:31] will retry after 282.873854ms: waiting for machine to come up
	I1008 19:07:03.445297  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.445810  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.445838  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.445740  585935 retry.go:31] will retry after 344.936527ms: waiting for machine to come up
	I1008 19:07:03.792413  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.792802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.792837  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.792741  585935 retry.go:31] will retry after 414.968289ms: waiting for machine to come up
	I1008 19:07:04.209200  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.209532  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.209555  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.209502  585935 retry.go:31] will retry after 403.180416ms: waiting for machine to come up
	I1008 19:07:04.614156  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.614679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.614713  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.614636  585935 retry.go:31] will retry after 631.841511ms: waiting for machine to come up
	I1008 19:07:05.248574  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.248983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.249015  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.248917  585935 retry.go:31] will retry after 639.776909ms: waiting for machine to come up
	I1008 19:07:05.890868  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.891332  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.891406  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.891329  585935 retry.go:31] will retry after 764.489176ms: waiting for machine to come up
	I1008 19:07:06.931497  584371 start.go:360] acquireMachinesLock for no-preload-966632: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:06.657130  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:06.657520  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:06.657550  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:06.657462  585935 retry.go:31] will retry after 1.348973281s: waiting for machine to come up
	I1008 19:07:08.008293  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:08.008779  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:08.008805  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:08.008740  585935 retry.go:31] will retry after 1.146283289s: waiting for machine to come up
	I1008 19:07:09.157106  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:09.157517  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:09.157546  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:09.157493  585935 retry.go:31] will retry after 1.510430686s: waiting for machine to come up
	I1008 19:07:10.669393  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:10.669802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:10.669831  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:10.669749  585935 retry.go:31] will retry after 2.380864418s: waiting for machine to come up
	I1008 19:07:13.053078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:13.053487  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:13.053512  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:13.053427  585935 retry.go:31] will retry after 2.553865951s: waiting for machine to come up
	I1008 19:07:15.610098  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:15.610501  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:15.610535  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:15.610428  585935 retry.go:31] will retry after 4.018444789s: waiting for machine to come up
	I1008 19:07:20.967039  585096 start.go:364] duration metric: took 3m30.476693248s to acquireMachinesLock for "default-k8s-diff-port-142496"
	I1008 19:07:20.967105  585096 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:20.967115  585096 fix.go:54] fixHost starting: 
	I1008 19:07:20.967619  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:20.967675  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:20.984936  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1008 19:07:20.985358  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:20.985869  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:07:20.985896  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:20.986199  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:20.986380  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:20.986520  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:07:20.987828  585096 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142496: state=Stopped err=<nil>
	I1008 19:07:20.987867  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	W1008 19:07:20.988020  585096 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:20.990029  585096 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142496" ...
	I1008 19:07:19.632076  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632468  585014 main.go:141] libmachine: (embed-certs-783146) Found IP for machine: 192.168.72.183
	I1008 19:07:19.632504  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has current primary IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632511  585014 main.go:141] libmachine: (embed-certs-783146) Reserving static IP address...
	I1008 19:07:19.632968  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.633020  585014 main.go:141] libmachine: (embed-certs-783146) DBG | skip adding static IP to network mk-embed-certs-783146 - found existing host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"}
	I1008 19:07:19.633041  585014 main.go:141] libmachine: (embed-certs-783146) Reserved static IP address: 192.168.72.183
	I1008 19:07:19.633062  585014 main.go:141] libmachine: (embed-certs-783146) Waiting for SSH to be available...
	I1008 19:07:19.633073  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Getting to WaitForSSH function...
	I1008 19:07:19.634939  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635221  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.635249  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635415  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH client type: external
	I1008 19:07:19.635453  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa (-rw-------)
	I1008 19:07:19.635496  585014 main.go:141] libmachine: (embed-certs-783146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:19.635509  585014 main.go:141] libmachine: (embed-certs-783146) DBG | About to run SSH command:
	I1008 19:07:19.635522  585014 main.go:141] libmachine: (embed-certs-783146) DBG | exit 0
	I1008 19:07:19.758276  585014 main.go:141] libmachine: (embed-certs-783146) DBG | SSH cmd err, output: <nil>: 
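Once embed-certs-783146 has an address, WaitForSSH checks reachability by shelling out to the system ssh client with the non-interactive options listed above and running exit 0. A rough Go equivalent of that probe is sketched below, reusing the options and key path shown in the log; the helper name is an assumption.

    // sshProbe runs "exit 0" on the guest through the system ssh binary with
    // the same kind of non-interactive options shown in the WaitForSSH log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshProbe(user, ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, ip),
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa"
        if err := sshProbe("docker", "192.168.72.183", key); err != nil {
            fmt.Println(err)
        }
    }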
	I1008 19:07:19.758658  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 19:07:19.759310  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:19.761990  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.762456  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762803  585014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:07:19.763012  585014 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:19.763034  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:19.763271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.765523  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765829  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.765858  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765988  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.766159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766289  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766433  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.766589  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.766877  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.766891  585014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:19.866272  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:19.866297  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866563  585014 buildroot.go:166] provisioning hostname "embed-certs-783146"
	I1008 19:07:19.866585  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866799  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.869295  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869648  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.869679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869836  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.870017  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870153  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870293  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.870444  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.870621  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.870636  585014 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-783146 && echo "embed-certs-783146" | sudo tee /etc/hostname
	I1008 19:07:19.983892  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-783146
	
	I1008 19:07:19.983925  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.986430  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986776  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.986806  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986922  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.987104  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.987588  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.987746  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.987762  585014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-783146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-783146/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-783146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:20.095178  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:20.095212  585014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:20.095264  585014 buildroot.go:174] setting up certificates
	I1008 19:07:20.095276  585014 provision.go:84] configureAuth start
	I1008 19:07:20.095288  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:20.095578  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.098000  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098431  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.098459  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098591  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.100935  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101241  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.101271  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101393  585014 provision.go:143] copyHostCerts
	I1008 19:07:20.101452  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:20.101463  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:20.101544  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:20.101807  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:20.101824  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:20.101873  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:20.102015  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:20.102029  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:20.102075  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:20.102152  585014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-783146 san=[127.0.0.1 192.168.72.183 embed-certs-783146 localhost minikube]
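provision.go then regenerates the machine's server certificate, signed by the profile's CA, with SANs for 127.0.0.1, the VM IP 192.168.72.183, the hostname, localhost and minikube. The sketch below shows how a CA-signed server certificate with that SAN set can be issued using crypto/x509; it is illustrative only, and the throwaway CA and helper name are assumptions rather than what provision.go does verbatim.

    // newServerCert issues a CA-signed server certificate with the SAN set
    // shown in the provisioning log. Purely illustrative.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-783146"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-783146", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.183")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // Throwaway CA so the sketch is self-contained; the real CA lives under .minikube/certs.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        certPEM, err := newServerCert(caCert, caKey)
        if err != nil {
            panic(err)
        }
        os.Stdout.Write(certPEM)
    }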
	I1008 19:07:20.378020  585014 provision.go:177] copyRemoteCerts
	I1008 19:07:20.378093  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:20.378133  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.380678  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381017  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.381050  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381175  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.381386  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.381579  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.381717  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.464627  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:20.487853  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:07:20.510174  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:07:20.532381  585014 provision.go:87] duration metric: took 437.094502ms to configureAuth
	I1008 19:07:20.532405  585014 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:20.532571  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:20.532669  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.535064  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.535382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.535753  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.535920  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.536039  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.536193  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.536406  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.536429  585014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:20.745937  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:20.745967  585014 machine.go:96] duration metric: took 982.940955ms to provisionDockerMachine
	I1008 19:07:20.745980  585014 start.go:293] postStartSetup for "embed-certs-783146" (driver="kvm2")
	I1008 19:07:20.745994  585014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:20.746012  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.746380  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:20.746417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.749056  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749395  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.749425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749566  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.749738  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.749852  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.749943  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.828580  585014 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:20.832894  585014 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:20.832923  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:20.832994  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:20.833069  585014 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:20.833162  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:20.842230  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:20.864957  585014 start.go:296] duration metric: took 118.964089ms for postStartSetup
	I1008 19:07:20.865006  585014 fix.go:56] duration metric: took 18.93548189s for fixHost
	I1008 19:07:20.865029  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.867709  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868089  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.868113  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868223  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.868425  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868583  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868742  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.868926  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.869159  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.869175  585014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:20.966898  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414440.940275348
	
	I1008 19:07:20.966919  585014 fix.go:216] guest clock: 1728414440.940275348
	I1008 19:07:20.966926  585014 fix.go:229] Guest: 2024-10-08 19:07:20.940275348 +0000 UTC Remote: 2024-10-08 19:07:20.865011917 +0000 UTC m=+214.857488447 (delta=75.263431ms)
	I1008 19:07:20.966948  585014 fix.go:200] guest clock delta is within tolerance: 75.263431ms
	I1008 19:07:20.966953  585014 start.go:83] releasing machines lock for "embed-certs-783146", held for 19.037463535s
	I1008 19:07:20.966979  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.967246  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.969983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.970386  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970586  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971061  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971243  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971340  585014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:20.971382  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.971487  585014 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:20.971515  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.974211  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974581  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974632  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.974695  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974872  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.974999  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.975024  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.975028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975184  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975228  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.975374  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975501  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.975559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975709  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:21.072152  585014 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:21.078116  585014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:21.221176  585014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:21.227359  585014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:21.227434  585014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:21.242691  585014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:21.242716  585014 start.go:495] detecting cgroup driver to use...
	I1008 19:07:21.242796  585014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:21.257429  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:21.270208  585014 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:21.270258  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:21.282826  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:21.295827  585014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:21.405804  585014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:21.572147  585014 docker.go:233] disabling docker service ...
	I1008 19:07:21.572231  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:21.586083  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:21.598657  585014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:21.722224  585014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:21.853317  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:21.867234  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:21.884872  585014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:21.884949  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.895154  585014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:21.895223  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.905371  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.915602  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.926026  585014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:21.938089  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.949261  585014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.966211  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.978120  585014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:21.987631  585014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:21.987693  585014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:22.002185  585014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:22.013111  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:22.135933  585014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:22.230256  585014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:22.230342  585014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:22.235005  585014 start.go:563] Will wait 60s for crictl version
	I1008 19:07:22.235076  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:07:22.238991  585014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:22.279302  585014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:22.279391  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.308343  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.337272  585014 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
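The runner lines above rewrite /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroup manager, default sysctls), restart crio, and then wait up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal Go sketch of that socket wait is below; waitForSocket is a hypothetical helper for illustration, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses,
// mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}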
	I1008 19:07:20.991759  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Start
	I1008 19:07:20.991997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring networks are active...
	I1008 19:07:20.992703  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network default is active
	I1008 19:07:20.993057  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network mk-default-k8s-diff-port-142496 is active
	I1008 19:07:20.993435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Getting domain xml...
	I1008 19:07:20.994209  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Creating domain...
	I1008 19:07:22.240185  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting to get IP...
	I1008 19:07:22.240949  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241417  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241469  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.241382  586083 retry.go:31] will retry after 234.248435ms: waiting for machine to come up
	I1008 19:07:22.476800  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477343  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477375  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.477275  586083 retry.go:31] will retry after 323.851452ms: waiting for machine to come up
	I1008 19:07:22.802997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803574  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803610  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.803516  586083 retry.go:31] will retry after 445.299956ms: waiting for machine to come up
	I1008 19:07:23.250211  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250686  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250715  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.250651  586083 retry.go:31] will retry after 574.786836ms: waiting for machine to come up
	I1008 19:07:23.827535  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828010  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.827959  586083 retry.go:31] will retry after 563.165045ms: waiting for machine to come up
	I1008 19:07:24.393150  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393741  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393792  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.393717  586083 retry.go:31] will retry after 576.443855ms: waiting for machine to come up
	I1008 19:07:24.971698  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972132  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972161  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.972090  586083 retry.go:31] will retry after 999.17904ms: waiting for machine to come up
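The retry.go lines above show the KVM driver polling for the machine's DHCP lease with a growing, jittered delay between attempts. A rough Go sketch of that retry pattern follows; the delays, attempt count, and the retryWithBackoff name are assumptions for illustration only, not the driver's implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a jittered, growing delay, in the spirit
// of the "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		i++
		if i < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}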
	I1008 19:07:22.338812  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:22.341998  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:22.342417  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342680  585014 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:22.346863  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:22.359456  585014 kubeadm.go:883] updating cluster {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:22.359630  585014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:22.359692  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:22.394832  585014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:22.394893  585014 ssh_runner.go:195] Run: which lz4
	I1008 19:07:22.398935  585014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:22.403100  585014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:22.403127  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:23.771685  585014 crio.go:462] duration metric: took 1.372780034s to copy over tarball
	I1008 19:07:23.771769  585014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:25.816508  585014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044704362s)
	I1008 19:07:25.816547  585014 crio.go:469] duration metric: took 2.04482777s to extract the tarball
	I1008 19:07:25.816557  585014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:25.852980  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:25.893366  585014 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:25.893391  585014 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:25.893399  585014 kubeadm.go:934] updating node { 192.168.72.183 8443 v1.31.1 crio true true} ...
	I1008 19:07:25.893517  585014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-783146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:25.893579  585014 ssh_runner.go:195] Run: crio config
	I1008 19:07:25.934828  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:25.934850  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:25.934874  585014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:25.934906  585014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.183 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-783146 NodeName:embed-certs-783146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:25.935039  585014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-783146"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:25.935106  585014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:25.944851  585014 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:25.944919  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:25.954022  585014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1008 19:07:25.979675  585014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:26.001147  585014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1008 19:07:26.017613  585014 ssh_runner.go:195] Run: grep 192.168.72.183	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:26.021401  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
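The two hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) use a shell one-liner that filters the old entry out of /etc/hosts and appends the new ip/hostname pair. A small Go sketch of the same filter-and-append step, assuming a hypothetical ensureHostEntry helper and ignoring the sudo/temp-file handling the real runner does:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing line for hostname and appends "ip\thostname".
func ensureHostEntry(content, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(ensureHostEntry(string(data), "192.168.72.183", "control-plane.minikube.internal"))
}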
	I1008 19:07:26.033347  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:25.972405  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972868  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972891  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:25.972831  586083 retry.go:31] will retry after 1.186801161s: waiting for machine to come up
	I1008 19:07:27.161319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161877  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161900  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:27.161823  586083 retry.go:31] will retry after 1.448383195s: waiting for machine to come up
	I1008 19:07:28.611319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611697  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:28.611613  586083 retry.go:31] will retry after 1.738948191s: waiting for machine to come up
	I1008 19:07:30.352081  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352582  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352617  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:30.352530  586083 retry.go:31] will retry after 2.624799898s: waiting for machine to come up
	I1008 19:07:26.138298  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:26.154419  585014 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146 for IP: 192.168.72.183
	I1008 19:07:26.154447  585014 certs.go:194] generating shared ca certs ...
	I1008 19:07:26.154470  585014 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:26.154651  585014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:26.154714  585014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:26.154729  585014 certs.go:256] generating profile certs ...
	I1008 19:07:26.154860  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/client.key
	I1008 19:07:26.154948  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key.b07aac04
	I1008 19:07:26.155003  585014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key
	I1008 19:07:26.155159  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:26.155202  585014 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:26.155212  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:26.155232  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:26.155256  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:26.155280  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:26.155319  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:26.156076  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:26.187225  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:26.235804  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:26.268034  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:26.292729  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 19:07:26.320118  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:26.351058  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:26.374004  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:07:26.396526  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:26.419067  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:26.441449  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:26.463768  585014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:26.479471  585014 ssh_runner.go:195] Run: openssl version
	I1008 19:07:26.484957  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:26.495286  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501166  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501225  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.507154  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:26.517587  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:26.528157  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532896  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532967  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.540724  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:26.554952  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:26.567160  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571304  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571394  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.576974  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:26.587198  585014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:26.591621  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:26.597176  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:26.602766  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:26.608373  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:26.613797  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:26.619310  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
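The openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expire within the next 24 hours. An equivalent check in Go parses the PEM file and compares NotAfter; this is a sketch with a hypothetical expiresWithin helper, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -noout -checkend <seconds>` checks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}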
	I1008 19:07:26.624702  585014 kubeadm.go:392] StartCluster: {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:26.624831  585014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:26.624878  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.666183  585014 cri.go:89] found id: ""
	I1008 19:07:26.666253  585014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:26.676621  585014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:26.676644  585014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:26.676699  585014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:26.686549  585014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:26.687532  585014 kubeconfig.go:125] found "embed-certs-783146" server: "https://192.168.72.183:8443"
	I1008 19:07:26.689545  585014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:26.698758  585014 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.183
	I1008 19:07:26.698790  585014 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:26.698804  585014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:26.698856  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.738148  585014 cri.go:89] found id: ""
	I1008 19:07:26.738209  585014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:26.753980  585014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:26.763186  585014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:26.763208  585014 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:26.763257  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:07:26.771789  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:26.771847  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:26.780812  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:07:26.789329  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:26.789390  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:26.798230  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.806781  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:26.806842  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.815549  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:07:26.823782  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:26.823830  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:26.832698  585014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:26.841687  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:26.945569  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.159232  585014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213619978s)
	I1008 19:07:28.159280  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.372727  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.456082  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.567486  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:28.567627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.067909  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.568466  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.068627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.567821  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.604366  585014 api_server.go:72] duration metric: took 2.036885191s to wait for apiserver process to appear ...
	I1008 19:07:30.604403  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:30.604440  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.461223  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.461270  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.461286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.499425  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.499473  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.604563  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.614594  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:33.614625  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.105286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.111706  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:34.111747  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.605326  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.612912  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:07:34.619204  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:34.619227  585014 api_server.go:131] duration metric: took 4.014816798s to wait for apiserver health ...
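The healthz wait above cycles through 403 (anonymous user rejected), 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A minimal Go poller for the same endpoint is sketched below; it skips TLS verification purely to stay short, which is an assumption, since the real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // shortcut for the sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.183:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}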
	I1008 19:07:34.619236  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:34.619242  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:34.621043  585014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:32.980593  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981141  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981171  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:32.981076  586083 retry.go:31] will retry after 3.401015855s: waiting for machine to come up
	I1008 19:07:34.622500  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:34.632627  585014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:34.654975  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:34.667824  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:34.667853  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:34.667863  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:34.667874  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:34.667879  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:34.667884  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:34.667890  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:34.667899  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:34.667904  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:34.667910  585014 system_pods.go:74] duration metric: took 12.913884ms to wait for pod list to return data ...
	I1008 19:07:34.667919  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:34.672996  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:34.673018  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:34.673029  585014 node_conditions.go:105] duration metric: took 5.105827ms to run NodePressure ...
	I1008 19:07:34.673045  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:34.992309  585014 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996835  585014 kubeadm.go:739] kubelet initialised
	I1008 19:07:34.996861  585014 kubeadm.go:740] duration metric: took 4.524726ms waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996870  585014 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:35.005255  585014 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.012539  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012568  585014 pod_ready.go:82] duration metric: took 7.278613ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.012580  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012589  585014 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.018465  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018489  585014 pod_ready.go:82] duration metric: took 5.8848ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.018500  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018509  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.026503  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026533  585014 pod_ready.go:82] duration metric: took 8.012156ms for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.026544  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026555  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.058419  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058449  585014 pod_ready.go:82] duration metric: took 31.879605ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.058463  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058471  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.458244  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458275  585014 pod_ready.go:82] duration metric: took 399.794285ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.458286  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458292  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.858567  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858612  585014 pod_ready.go:82] duration metric: took 400.312425ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.858625  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858637  585014 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:36.258490  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258520  585014 pod_ready.go:82] duration metric: took 399.870797ms for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:36.258530  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258538  585014 pod_ready.go:39] duration metric: took 1.261659261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
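The loop above skips every system pod because the node itself has not reported Ready yet; the harness then falls back to the 6m node wait that starts a few lines further down. Outside the harness, a roughly equivalent check can be run with kubectl wait (a minimal sketch, assuming the kubeconfig context carries the profile name, as elsewhere in this report):

    # wait for the restarted node first, then for the system-critical pods
    kubectl --context embed-certs-783146 wait --for=condition=Ready node/embed-certs-783146 --timeout=6m
    kubectl --context embed-certs-783146 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl --context embed-certs-783146 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m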
	I1008 19:07:36.258558  585014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:07:36.269993  585014 ops.go:34] apiserver oom_adj: -16
	I1008 19:07:36.270016  585014 kubeadm.go:597] duration metric: took 9.593365367s to restartPrimaryControlPlane
	I1008 19:07:36.270025  585014 kubeadm.go:394] duration metric: took 9.645330227s to StartCluster
	I1008 19:07:36.270044  585014 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.270125  585014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:07:36.271682  585014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.271945  585014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:07:36.272024  585014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:07:36.272130  585014 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-783146"
	I1008 19:07:36.272158  585014 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-783146"
	W1008 19:07:36.272166  585014 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:07:36.272152  585014 addons.go:69] Setting default-storageclass=true in profile "embed-certs-783146"
	I1008 19:07:36.272179  585014 addons.go:69] Setting metrics-server=true in profile "embed-certs-783146"
	I1008 19:07:36.272198  585014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-783146"
	I1008 19:07:36.272203  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272213  585014 addons.go:234] Setting addon metrics-server=true in "embed-certs-783146"
	W1008 19:07:36.272224  585014 addons.go:243] addon metrics-server should already be in state true
	I1008 19:07:36.272256  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272187  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:36.272616  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272638  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272658  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272689  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272694  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272738  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.274263  585014 out.go:177] * Verifying Kubernetes components...
	I1008 19:07:36.275444  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:36.288219  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1008 19:07:36.288686  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.289297  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.289328  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.289721  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.290415  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.290462  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.293043  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1008 19:07:36.293374  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I1008 19:07:36.293461  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293721  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293954  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.293978  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294188  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.294212  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294299  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294504  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.294534  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294982  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.295028  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.297638  585014 addons.go:234] Setting addon default-storageclass=true in "embed-certs-783146"
	W1008 19:07:36.297661  585014 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:07:36.297692  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.298042  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.298081  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.309286  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1008 19:07:36.309776  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310024  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1008 19:07:36.310337  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310360  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.310478  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310771  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.310980  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310997  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.311013  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.311330  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.311500  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.313004  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1008 19:07:36.313159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313368  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.313523  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313926  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.313951  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.314284  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.314777  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.314820  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.314992  585014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:07:36.315010  585014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:07:36.316168  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:07:36.316191  585014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:07:36.316212  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.316309  585014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.316333  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:07:36.316352  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.320088  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320418  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320566  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320591  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320733  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.320888  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320912  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320931  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321074  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.321181  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321235  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321400  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321397  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.321532  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.331532  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1008 19:07:36.331881  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.332309  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.332331  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.332724  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.332929  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.334589  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.334775  585014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.334797  585014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:07:36.334811  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.337675  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.338093  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338209  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.338380  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.338491  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.338600  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.444532  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:36.462719  585014 node_ready.go:35] waiting up to 6m0s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:36.519485  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.613714  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:07:36.613738  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:07:36.637773  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.645883  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:07:36.645907  585014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:07:36.685924  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.685952  585014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:07:36.710461  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.970231  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970256  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970563  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970589  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970599  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970606  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970860  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970881  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970892  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980520  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.980538  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.980826  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980869  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.980888  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.676577  585014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038767196s)
	I1008 19:07:37.676633  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.676646  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.676972  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.676982  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677040  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677058  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.677075  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.677333  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677351  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677375  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689600  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689615  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.689883  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.689897  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689901  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.689917  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689934  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.690210  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.690227  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.690240  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.690256  585014 addons.go:475] Verifying addon metrics-server=true in "embed-certs-783146"
	I1008 19:07:37.692035  585014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
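At this point the addon manifests have been applied, but metrics-server is still Pending in the pod list at the top of this block. The addon state can be cross-checked by hand (a sketch; profile and context names taken from the log, and the metrics API only answers once metrics-server is actually serving):

    minikube -p embed-certs-783146 addons list | grep -E 'metrics-server|storage-provisioner'
    kubectl --context embed-certs-783146 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-783146 -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl --context embed-certs-783146 top nodes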
	I1008 19:07:36.383659  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.383993  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.384026  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:36.383939  586083 retry.go:31] will retry after 3.325274435s: waiting for machine to come up
	I1008 19:07:39.713420  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.713902  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Found IP for machine: 192.168.50.213
	I1008 19:07:39.713926  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserving static IP address...
	I1008 19:07:39.713945  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has current primary IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.714332  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.714362  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserved static IP address: 192.168.50.213
	I1008 19:07:39.714382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | skip adding static IP to network mk-default-k8s-diff-port-142496 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"}
	I1008 19:07:39.714401  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Getting to WaitForSSH function...
	I1008 19:07:39.714415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for SSH to be available...
	I1008 19:07:39.716542  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.716905  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.716951  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.717025  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH client type: external
	I1008 19:07:39.717052  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa (-rw-------)
	I1008 19:07:39.717111  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:39.717147  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | About to run SSH command:
	I1008 19:07:39.717165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | exit 0
	I1008 19:07:39.842089  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | SSH cmd err, output: <nil>: 
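The external-SSH probe above amounts to running "exit 0" over the machine's key until it succeeds. Assembled into a single command it looks roughly like this (address and key path copied from the log; a sketch, not the exact invocation the driver builds):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
      -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa \
      docker@192.168.50.213 'exit 0'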
	I1008 19:07:39.842499  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetConfigRaw
	I1008 19:07:39.843125  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:39.845604  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.845976  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.846008  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.846276  585096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:07:39.846509  585096 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:39.846541  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:39.846768  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.849107  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849411  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.849435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849743  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.849924  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850084  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850236  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.850422  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.850679  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.850695  585096 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:39.950481  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:39.950507  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.950796  585096 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142496"
	I1008 19:07:39.950825  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.951016  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.953300  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.953678  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953833  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.954002  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954168  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954297  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.954450  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.954621  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.954636  585096 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142496 && echo "default-k8s-diff-port-142496" | sudo tee /etc/hostname
	I1008 19:07:40.068848  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142496
	
	I1008 19:07:40.068876  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.071855  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072195  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.072226  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072392  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.072563  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072746  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072871  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.073039  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.073237  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.073257  585096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:40.183039  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
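The snippet above writes the machine name into /etc/hostname and pins it to 127.0.1.1 in /etc/hosts so the node can always resolve its own name. The result can be spot-checked over the same key (a sketch):

    ssh -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa \
      docker@192.168.50.213 'hostname; grep -n default-k8s-diff-port-142496 /etc/hostname /etc/hosts'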
	I1008 19:07:40.183073  585096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:40.183116  585096 buildroot.go:174] setting up certificates
	I1008 19:07:40.183131  585096 provision.go:84] configureAuth start
	I1008 19:07:40.183146  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:40.183451  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:40.185904  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186264  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.186284  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186453  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.188672  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.189037  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189134  585096 provision.go:143] copyHostCerts
	I1008 19:07:40.189204  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:40.189217  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:40.189281  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:40.189427  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:40.189441  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:40.189474  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:40.189563  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:40.189573  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:40.189600  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:40.189679  585096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142496 san=[127.0.0.1 192.168.50.213 default-k8s-diff-port-142496 localhost minikube]
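The server certificate generated here carries the SANs listed above (loopback, the machine IP, the hostname, and the generic minikube names) and is the same server.pem copied to /etc/docker/server.pem a few lines below. Its SAN list can be inspected directly (a sketch, using the path from this log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'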
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:37.693230  585014 addons.go:510] duration metric: took 1.421218857s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1008 19:07:38.466754  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:40.967492  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:40.418969  585096 provision.go:177] copyRemoteCerts
	I1008 19:07:40.419032  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:40.419060  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.421382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421701  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.421730  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421912  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.422108  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.422287  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.422426  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.500533  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:40.524199  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 19:07:40.547495  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:07:40.570656  585096 provision.go:87] duration metric: took 387.509086ms to configureAuth
	I1008 19:07:40.570687  585096 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:40.570859  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:40.570934  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.573578  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.573941  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.573970  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.574088  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.574290  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574534  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574680  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.574881  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.575056  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.575074  585096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:40.795575  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:40.795604  585096 machine.go:96] duration metric: took 949.073836ms to provisionDockerMachine
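Provisioning ends with the CRI-O drop-in written above, which flags the service CIDR 10.96.0.0/12 as an insecure registry and restarts the runtime. Whether it took effect can be verified on the guest (a sketch):

    ssh -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa \
      docker@192.168.50.213 'cat /etc/sysconfig/crio.minikube; systemctl is-active crio'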
	I1008 19:07:40.795618  585096 start.go:293] postStartSetup for "default-k8s-diff-port-142496" (driver="kvm2")
	I1008 19:07:40.795629  585096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:40.795646  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:40.796003  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:40.796042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.798307  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798635  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.798666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798881  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.799039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.799249  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.799369  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.880470  585096 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:40.884632  585096 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:40.884660  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:40.884719  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:40.884834  585096 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:40.884947  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:40.893828  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:40.917278  585096 start.go:296] duration metric: took 121.644332ms for postStartSetup
	I1008 19:07:40.917320  585096 fix.go:56] duration metric: took 19.950206082s for fixHost
	I1008 19:07:40.917342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.919971  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920315  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.920342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920539  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.920782  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.920969  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.921114  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.921292  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.921519  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.921535  585096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:41.022573  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414460.977520721
	
	I1008 19:07:41.022596  585096 fix.go:216] guest clock: 1728414460.977520721
	I1008 19:07:41.022603  585096 fix.go:229] Guest: 2024-10-08 19:07:40.977520721 +0000 UTC Remote: 2024-10-08 19:07:40.917324605 +0000 UTC m=+230.557951471 (delta=60.196116ms)
	I1008 19:07:41.022627  585096 fix.go:200] guest clock delta is within tolerance: 60.196116ms
	I1008 19:07:41.022634  585096 start.go:83] releasing machines lock for "default-k8s-diff-port-142496", held for 20.055558507s
	I1008 19:07:41.022665  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.022896  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:41.025861  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026272  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.026301  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026479  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027126  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027537  585096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:41.027581  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.027725  585096 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:41.027749  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.030474  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.030745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031094  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031123  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031148  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031322  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031511  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031572  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031827  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.031883  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.135368  585096 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:41.141492  585096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:41.288617  585096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:41.295482  585096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:41.295550  585096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:41.310709  585096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:41.310738  585096 start.go:495] detecting cgroup driver to use...
	I1008 19:07:41.310821  585096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:41.328574  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:41.342506  585096 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:41.342564  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:41.356308  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:41.372510  585096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:41.497084  585096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:41.665187  585096 docker.go:233] disabling docker service ...
	I1008 19:07:41.665272  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:41.682309  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:41.702567  585096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:41.882727  585096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:42.006479  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:42.020474  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:42.039750  585096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:42.039834  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.050395  585096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:42.050449  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.060572  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.071974  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.083208  585096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:42.097166  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.110090  585096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.128424  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
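The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). A rough, hypothetical sketch of the same kind of in-place key rewrite in Go, run locally instead of through ssh_runner (the file path and keys match the log; the helper itself is illustrative):

package main

import (
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line with `key = "value"`,
// mirroring the sed substitutions shown in the log above.
func setKey(data []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	data = setKey(data, "pause_image", "registry.k8s.io/pause:3.10")
	data = setKey(data, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}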
	I1008 19:07:42.139296  585096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:42.148278  585096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:42.148320  585096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:42.164007  585096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
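The three runs above are the usual bridge-netfilter prerequisite: the sysctl probe fails because br_netfilter is not yet loaded, the module is loaded, and IPv4 forwarding is switched on. A hypothetical sketch of the same sequence when executed locally rather than over ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge-netfilter sysctl; it fails while br_netfilter is unloaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Load the module so /proc/sys/net/bridge/* exists.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
}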
	I1008 19:07:42.173218  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:42.303890  585096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:42.412074  585096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:42.412155  585096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:42.418606  585096 start.go:563] Will wait 60s for crictl version
	I1008 19:07:42.418662  585096 ssh_runner.go:195] Run: which crictl
	I1008 19:07:42.422670  585096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:42.469322  585096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:42.469432  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.501089  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.530412  585096 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:42.531554  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:42.534587  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.534928  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:42.534968  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.535235  585096 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:42.539279  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:42.552259  585096 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:42.552380  585096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:42.552447  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:42.588849  585096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:42.588928  585096 ssh_runner.go:195] Run: which lz4
	I1008 19:07:42.592785  585096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:42.597089  585096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:42.597119  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:44.003959  585096 crio.go:462] duration metric: took 1.411213503s to copy over tarball
	I1008 19:07:44.004075  585096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:43.467315  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:43.975147  585014 node_ready.go:49] node "embed-certs-783146" has status "Ready":"True"
	I1008 19:07:43.975180  585014 node_ready.go:38] duration metric: took 7.512429362s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:43.975194  585014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:43.982537  585014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999539  585014 pod_ready.go:93] pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:43.999566  585014 pod_ready.go:82] duration metric: took 16.995034ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999578  585014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506007  585014 pod_ready.go:93] pod "etcd-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:44.506032  585014 pod_ready.go:82] duration metric: took 506.447262ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506043  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
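The interleaved old-k8s-version-256554 lines show libmachine polling for the VM's DHCP lease with a growing, jittered delay before it gets an IP address. A rough sketch of such a retry loop (a hypothetical helper, not libmachine's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op until it succeeds or attempts are exhausted, sleeping a
// growing, jittered delay between tries, like the "will retry after ..." lines.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(5, 300*time.Millisecond, func() error {
		return errors.New("unable to find current IP address of domain")
	})
	fmt.Println("final:", err)
}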
	I1008 19:07:46.159478  585096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155358377s)
	I1008 19:07:46.159512  585096 crio.go:469] duration metric: took 2.155494994s to extract the tarball
	I1008 19:07:46.159532  585096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:46.196073  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:46.239224  585096 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:46.239253  585096 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:46.239263  585096 kubeadm.go:934] updating node { 192.168.50.213 8444 v1.31.1 crio true true} ...
	I1008 19:07:46.239412  585096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:46.239482  585096 ssh_runner.go:195] Run: crio config
	I1008 19:07:46.284916  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:46.284941  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:46.284959  585096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:46.284980  585096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142496 NodeName:default-k8s-diff-port-142496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:46.285145  585096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142496"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:46.285218  585096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:46.295176  585096 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:46.295278  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:46.304340  585096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1008 19:07:46.320234  585096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:46.336215  585096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
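The kubeadm.go:187 block above is generated from the parameters dumped at kubeadm.go:181 and is copied here to /var/tmp/minikube/kubeadm.yaml.new. A small, hypothetical sketch of rendering such a fragment with text/template (field and template names are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	AdvertiseAddress, CRISocket, NodeName, NodeIP string
	APIServerPort                                 int
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the cluster config shown earlier in the log.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.50.213",
		APIServerPort:    8444,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "default-k8s-diff-port-142496",
		NodeIP:           "192.168.50.213",
	})
}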
	I1008 19:07:46.352435  585096 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:46.355991  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:46.367424  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:46.491070  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:46.509165  585096 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496 for IP: 192.168.50.213
	I1008 19:07:46.509192  585096 certs.go:194] generating shared ca certs ...
	I1008 19:07:46.509213  585096 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:46.509413  585096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:46.509488  585096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:46.509507  585096 certs.go:256] generating profile certs ...
	I1008 19:07:46.509642  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/client.key
	I1008 19:07:46.509724  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key.8b79a92b
	I1008 19:07:46.509806  585096 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key
	I1008 19:07:46.510014  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:46.510069  585096 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:46.510082  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:46.510109  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:46.510154  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:46.510177  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:46.510220  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:46.510965  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:46.548979  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:46.588042  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:46.617201  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:46.645499  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 19:07:46.673075  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:46.705336  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:46.727739  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:07:46.755352  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:46.782421  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:46.804813  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:46.827321  585096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:46.843375  585096 ssh_runner.go:195] Run: openssl version
	I1008 19:07:46.848936  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:46.860851  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865320  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865379  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.871107  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:46.881518  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:46.891868  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.895991  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.896026  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.901219  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:46.914282  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:46.925095  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929407  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929465  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.934778  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:46.946807  585096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:46.951173  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:46.957072  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:46.962822  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:46.968584  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:46.974679  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:46.980081  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
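Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate remains valid for at least another 24 hours. An equivalent, hypothetical check in Go with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAtLeast reports whether the PEM certificate at pemPath is still
// valid d from now, i.e. the Go equivalent of `openssl x509 -checkend`.
func validForAtLeast(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAtLeast("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}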
	I1008 19:07:46.985537  585096 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:46.985659  585096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:46.985706  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.025838  585096 cri.go:89] found id: ""
	I1008 19:07:47.025924  585096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:47.037778  585096 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:47.037800  585096 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:47.037847  585096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:47.049787  585096 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:47.050778  585096 kubeconfig.go:125] found "default-k8s-diff-port-142496" server: "https://192.168.50.213:8444"
	I1008 19:07:47.052921  585096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:47.062696  585096 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I1008 19:07:47.062747  585096 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:47.062775  585096 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:47.062822  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.101981  585096 cri.go:89] found id: ""
	I1008 19:07:47.102054  585096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:47.119421  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:47.129168  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:47.129189  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:47.129253  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:07:47.138071  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:47.138125  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:47.147202  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:07:47.155923  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:47.155979  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:47.164829  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.173366  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:47.173413  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.182417  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:07:47.191170  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:47.191228  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:47.200115  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:47.209146  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:47.314572  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.318198  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003546788s)
	I1008 19:07:48.318245  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.533505  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.617977  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.743670  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:48.743782  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.244765  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.744287  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.243920  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:46.513648  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:49.013409  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:50.422334  585014 pod_ready.go:93] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.422364  585014 pod_ready.go:82] duration metric: took 5.916314463s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.422379  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929739  585014 pod_ready.go:93] pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.929775  585014 pod_ready.go:82] duration metric: took 507.386631ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929790  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935612  585014 pod_ready.go:93] pod "kube-proxy-9l7t7" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.935638  585014 pod_ready.go:82] duration metric: took 5.84081ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935650  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941106  585014 pod_ready.go:93] pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.941131  585014 pod_ready.go:82] duration metric: took 5.47259ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941143  585014 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
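The pod_ready.go lines for embed-certs-783146 poll each system pod until its Ready condition turns True (the metrics-server pod is still shown as not Ready at this point). A minimal, hypothetical client-go sketch of such a wait; the kubeconfig path and pod name are placeholders, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its Ready condition is True or the
// timeout elapses, roughly what the "waiting up to 6m0s for pod ..." lines do.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-783146", 6*time.Minute); err != nil {
		fmt.Println("pod not ready:", err)
	}
}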
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
	I1008 19:07:50.744524  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.830124  585096 api_server.go:72] duration metric: took 2.086461343s to wait for apiserver process to appear ...
	I1008 19:07:50.830161  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:50.830192  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:50.830915  585096 api_server.go:269] stopped: https://192.168.50.213:8444/healthz: Get "https://192.168.50.213:8444/healthz": dial tcp 192.168.50.213:8444: connect: connection refused
	I1008 19:07:51.331031  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.027442  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.027468  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.027483  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.101043  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.101073  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.330385  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.335009  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.335035  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:54.830407  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.835912  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.835939  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:55.330454  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:55.336271  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:07:55.343556  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:55.343586  585096 api_server.go:131] duration metric: took 4.513416619s to wait for apiserver health ...
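
The 500s above come from the rbac and scheduling bootstrap post-start hooks not having finished yet; minikube simply keeps polling /healthz until it returns 200. A minimal Go sketch of that kind of polling loop, assuming a self-signed apiserver certificate and an illustrative timeout (this is not minikube's actual code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	// The apiserver cert is self-signed during bring-up, so skip verification in this sketch.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.213:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
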
	I1008 19:07:55.343604  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:55.343612  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:55.345259  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:55.346612  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:55.357899  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:55.383903  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:52.948407  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:55.449059  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:07:55.397903  585096 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:55.397935  585096 system_pods.go:61] "coredns-7c65d6cfc9-tkg8j" [0b436a1f-2b8e-4a5f-8063-695480275f2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:55.397944  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [cc702ae5-7e74-4a18-942e-1d236d39c43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:55.397952  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [da72d2f3-aab5-42c3-9733-7c0ce470e61e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:55.397959  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [de964717-b4de-4c7c-a9b5-164e7a048d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:55.397966  585096 system_pods.go:61] "kube-proxy-lwggr" [d5d96599-c3d3-4eba-a2ad-0c027e8ef1ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:55.397971  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [9218d69d-97ca-4680-856b-95c43fa371ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:55.397976  585096 system_pods.go:61] "metrics-server-6867b74b74-pfc2c" [9bafd6da-a33e-4182-a0d7-5e4c9473f057] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:55.397982  585096 system_pods.go:61] "storage-provisioner" [b60980ab-2552-404e-b351-4b163a075732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:55.397988  585096 system_pods.go:74] duration metric: took 14.056648ms to wait for pod list to return data ...
	I1008 19:07:55.397997  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:55.403870  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:55.403906  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:55.403920  585096 node_conditions.go:105] duration metric: took 5.917994ms to run NodePressure ...
	I1008 19:07:55.403941  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:55.677555  585096 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682514  585096 kubeadm.go:739] kubelet initialised
	I1008 19:07:55.682539  585096 kubeadm.go:740] duration metric: took 4.953783ms waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682550  585096 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:55.688641  585096 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:57.695361  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.195582  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
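
The pod_ready lines poll each system-critical pod until its Ready condition turns True. A rough client-go sketch of the same check; the kubeconfig path and pod name below are placeholders, and retry/backoff details are assumed:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-tkg8j", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
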
	I1008 19:07:57.948167  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.446946  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
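
provisionDockerMachine performs the guest setup above (hostname, /etc/hosts, certificates, crio options) by running shell commands over SSH with the machine's private key, as the "Using SSH client type" lines show. A small illustrative sketch using golang.org/x/crypto/ssh; the address, user and key path are stand-ins taken from the log, and minikube's retry handling is omitted:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH executes one command on the remote host and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.168.39.90:22", "docker", "/path/to/id_rsa", "hostname")
    	fmt.Println(out, err)
    }
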
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.834691  584371 start.go:364] duration metric: took 54.903126319s to acquireMachinesLock for "no-preload-966632"
	I1008 19:08:01.834745  584371 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:08:01.834753  584371 fix.go:54] fixHost starting: 
	I1008 19:08:01.835158  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:01.835200  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:01.854850  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1008 19:08:01.855220  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:01.855740  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:01.855770  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:01.856201  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:01.856428  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:01.856587  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:01.857921  584371 fix.go:112] recreateIfNeeded on no-preload-966632: state=Stopped err=<nil>
	I1008 19:08:01.857943  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	W1008 19:08:01.858110  584371 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:08:01.859994  584371 out.go:177] * Restarting existing kvm2 VM for "no-preload-966632" ...
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
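
The sed commands above pin the pause image to registry.k8s.io/pause:3.2 and switch crio's cgroup driver to cgroupfs before the daemon is restarted. An equivalent, hypothetical Go helper that does the same whole-line rewrite on the drop-in file (the path and values are taken from the log; the implementation is an assumption, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfValue replaces any existing `key = ...` line in a crio drop-in with key = "value".
    func setConfValue(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
    		fmt.Println(err)
    	}
    	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Println(err)
    	}
    }
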
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
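
After restarting crio, minikube waits up to 60s for /var/run/crio/crio.sock to exist before querying crictl for the runtime version. A trivial sketch of that kind of wait; the loop shape and poll interval are assumptions:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket blocks until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
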
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 19:08:01.861355  584371 main.go:141] libmachine: (no-preload-966632) Calling .Start
	I1008 19:08:01.861539  584371 main.go:141] libmachine: (no-preload-966632) Ensuring networks are active...
	I1008 19:08:01.862455  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network default is active
	I1008 19:08:01.862878  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network mk-no-preload-966632 is active
	I1008 19:08:01.863368  584371 main.go:141] libmachine: (no-preload-966632) Getting domain xml...
	I1008 19:08:01.864106  584371 main.go:141] libmachine: (no-preload-966632) Creating domain...
	I1008 19:08:03.179854  584371 main.go:141] libmachine: (no-preload-966632) Waiting to get IP...
	I1008 19:08:03.180838  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.181232  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.181301  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.181206  586496 retry.go:31] will retry after 229.567854ms: waiting for machine to come up
	I1008 19:08:03.412710  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.413201  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.413225  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.413170  586496 retry.go:31] will retry after 361.675143ms: waiting for machine to come up
	I1008 19:08:03.776466  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.777140  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.777184  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.777047  586496 retry.go:31] will retry after 323.194852ms: waiting for machine to come up
	I1008 19:08:04.101865  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.102357  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.102388  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.102310  586496 retry.go:31] will retry after 484.995282ms: waiting for machine to come up
	I1008 19:08:02.698935  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:05.195930  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:02.447582  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:04.450889  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
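
The "couldn't find preloaded image" decision above comes from running "sudo crictl images --output json" and checking whether the expected kube-apiserver tag is already present; only then does minikube fall back to copying and extracting the preload tarball. A hedged sketch of that check; the JSON field names mirror the CRI Image message and are an assumption, not verified against this crictl version:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages models the relevant part of `crictl images --output json`;
    // the field names here are assumed from the CRI Image message.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the runtime already has the given image tag loaded.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	found, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
    	fmt.Println(found, err)
    }
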
	I1008 19:08:04.589063  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.589752  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.589780  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.589706  586496 retry.go:31] will retry after 543.703113ms: waiting for machine to come up
	I1008 19:08:05.135522  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.135997  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.136023  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.135944  586496 retry.go:31] will retry after 617.479763ms: waiting for machine to come up
	I1008 19:08:05.754978  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.755541  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.755568  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.755486  586496 retry.go:31] will retry after 849.017716ms: waiting for machine to come up
	I1008 19:08:06.606621  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:06.607072  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:06.607105  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:06.607023  586496 retry.go:31] will retry after 1.133489837s: waiting for machine to come up
	I1008 19:08:07.742713  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:07.743299  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:07.743329  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:07.743252  586496 retry.go:31] will retry after 1.797316795s: waiting for machine to come up
	I1008 19:08:07.196317  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.698409  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.698443  585096 pod_ready.go:82] duration metric: took 12.009772792s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.698475  585096 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.708991  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.709015  585096 pod_ready.go:82] duration metric: took 10.527401ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.709028  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714343  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.714369  585096 pod_ready.go:82] duration metric: took 5.331417ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714383  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.118973  585096 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:06.948829  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:09.448376  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:09.541879  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:09.542340  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:09.542372  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:09.542288  586496 retry.go:31] will retry after 2.238590286s: waiting for machine to come up
	I1008 19:08:11.783440  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:11.783909  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:11.783945  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:11.783858  586496 retry.go:31] will retry after 2.226110801s: waiting for machine to come up
	I1008 19:08:14.012103  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:14.012538  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:14.012561  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:14.012493  586496 retry.go:31] will retry after 2.298206633s: waiting for machine to come up
	I1008 19:08:10.849833  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.849856  585096 pod_ready.go:82] duration metric: took 3.13546554s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.849868  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858341  585096 pod_ready.go:93] pod "kube-proxy-lwggr" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.858367  585096 pod_ready.go:82] duration metric: took 8.492572ms for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858379  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865890  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.865909  585096 pod_ready.go:82] duration metric: took 7.521945ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865918  585096 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:12.873861  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:15.372408  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.450482  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:13.948331  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.313868  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:16.314460  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:16.314484  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:16.314424  586496 retry.go:31] will retry after 3.672085858s: waiting for machine to come up
	I1008 19:08:17.872689  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.372637  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.448090  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:18.947580  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.948804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.989014  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989556  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has current primary IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989576  584371 main.go:141] libmachine: (no-preload-966632) Found IP for machine: 192.168.61.141
	I1008 19:08:19.989589  584371 main.go:141] libmachine: (no-preload-966632) Reserving static IP address...
	I1008 19:08:19.990000  584371 main.go:141] libmachine: (no-preload-966632) Reserved static IP address: 192.168.61.141
	I1008 19:08:19.990036  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.990048  584371 main.go:141] libmachine: (no-preload-966632) Waiting for SSH to be available...
	I1008 19:08:19.990068  584371 main.go:141] libmachine: (no-preload-966632) DBG | skip adding static IP to network mk-no-preload-966632 - found existing host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"}
	I1008 19:08:19.990076  584371 main.go:141] libmachine: (no-preload-966632) DBG | Getting to WaitForSSH function...
	I1008 19:08:19.992644  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.992970  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.993010  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.993081  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH client type: external
	I1008 19:08:19.993104  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa (-rw-------)
	I1008 19:08:19.993136  584371 main.go:141] libmachine: (no-preload-966632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:19.993152  584371 main.go:141] libmachine: (no-preload-966632) DBG | About to run SSH command:
	I1008 19:08:19.993174  584371 main.go:141] libmachine: (no-preload-966632) DBG | exit 0
	I1008 19:08:20.118205  584371 main.go:141] libmachine: (no-preload-966632) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:20.118616  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetConfigRaw
	I1008 19:08:20.119326  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.122203  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122678  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.122708  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122926  584371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 19:08:20.123144  584371 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:20.123164  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:20.123360  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.125759  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126083  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.126108  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126265  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.126442  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.126980  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.127189  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.127201  584371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:20.234458  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:20.234491  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.234781  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:08:20.234811  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.235044  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.237673  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.237993  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.238016  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.238221  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.238418  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238612  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238806  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.238981  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.239176  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.239203  584371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-966632 && echo "no-preload-966632" | sudo tee /etc/hostname
	I1008 19:08:20.360621  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-966632
	
	I1008 19:08:20.360649  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.363600  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.363909  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.363947  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.364166  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.364297  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364426  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364510  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.364630  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.364855  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.364881  584371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-966632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-966632/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-966632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:20.483101  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:20.483131  584371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:20.483149  584371 buildroot.go:174] setting up certificates
	I1008 19:08:20.483161  584371 provision.go:84] configureAuth start
	I1008 19:08:20.483171  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.483429  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.486467  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.486838  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.486871  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.487037  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.489207  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489531  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.489557  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489655  584371 provision.go:143] copyHostCerts
	I1008 19:08:20.489726  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:20.489737  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:20.489803  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:20.489927  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:20.489939  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:20.489987  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:20.490072  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:20.490083  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:20.490110  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:20.490231  584371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.no-preload-966632 san=[127.0.0.1 192.168.61.141 localhost minikube no-preload-966632]
	I1008 19:08:20.618050  584371 provision.go:177] copyRemoteCerts
	I1008 19:08:20.618117  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:20.618149  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.621118  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621458  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.621485  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621670  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.621875  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.622056  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.622224  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:20.704439  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:20.730441  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:08:20.755072  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:08:20.777513  584371 provision.go:87] duration metric: took 294.340685ms to configureAuth
	I1008 19:08:20.777550  584371 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:20.777774  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:08:20.777873  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.780540  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.780956  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.780995  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.781185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.781423  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781615  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.781989  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.782179  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.782203  584371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:21.003896  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:21.003925  584371 machine.go:96] duration metric: took 880.766243ms to provisionDockerMachine
	I1008 19:08:21.003940  584371 start.go:293] postStartSetup for "no-preload-966632" (driver="kvm2")
	I1008 19:08:21.003955  584371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:21.003974  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.004286  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:21.004312  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.007138  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007472  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.007500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007610  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.007820  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.007991  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.008163  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.093075  584371 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:21.097048  584371 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:21.097076  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:21.097160  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:21.097254  584371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:21.097370  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:21.106698  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:21.130484  584371 start.go:296] duration metric: took 126.530716ms for postStartSetup
	I1008 19:08:21.130526  584371 fix.go:56] duration metric: took 19.295774496s for fixHost
	I1008 19:08:21.130550  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.133361  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.133717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.133744  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.134048  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.134269  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134525  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134710  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.134888  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:21.135119  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:21.135135  584371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:21.242740  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414501.194174379
	
	I1008 19:08:21.242765  584371 fix.go:216] guest clock: 1728414501.194174379
	I1008 19:08:21.242776  584371 fix.go:229] Guest: 2024-10-08 19:08:21.194174379 +0000 UTC Remote: 2024-10-08 19:08:21.130530022 +0000 UTC m=+356.786912807 (delta=63.644357ms)
	I1008 19:08:21.242823  584371 fix.go:200] guest clock delta is within tolerance: 63.644357ms
	I1008 19:08:21.242835  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 19.408108613s
	I1008 19:08:21.242857  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.243112  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:21.245967  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246378  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.246409  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246731  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247314  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247500  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247588  584371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:21.247640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.247706  584371 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:21.247731  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.250191  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250228  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250665  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250694  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250729  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250789  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250948  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250962  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251129  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251314  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.251334  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251462  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.353600  584371 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:21.360031  584371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:21.502001  584371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:21.508846  584371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:21.508938  584371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:21.524597  584371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:21.524626  584371 start.go:495] detecting cgroup driver to use...
	I1008 19:08:21.524699  584371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:21.541500  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:21.553886  584371 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:21.553943  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:21.567027  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:21.579965  584371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:21.692823  584371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:21.844393  584371 docker.go:233] disabling docker service ...
	I1008 19:08:21.844461  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:21.860471  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:21.873229  584371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:22.003106  584371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:22.129301  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:22.143314  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:22.161423  584371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:08:22.161494  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.171355  584371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:22.171429  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.180962  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.190212  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.199737  584371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:22.209488  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.219051  584371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.235430  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.245007  584371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:22.253705  584371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:22.253748  584371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:22.265343  584371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:22.275245  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:22.380960  584371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:22.471004  584371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:22.471067  584371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:22.475520  584371 start.go:563] Will wait 60s for crictl version
	I1008 19:08:22.475598  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.479271  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:22.523709  584371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:22.523787  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.551307  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.579271  584371 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:08:22.580608  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:22.583417  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583783  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:22.583825  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583991  584371 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:22.587937  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:22.600324  584371 kubeadm.go:883] updating cluster {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:22.600465  584371 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:08:22.600506  584371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:22.641111  584371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:08:22.641139  584371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:22.641194  584371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.641224  584371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.641284  584371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.641307  584371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.641377  584371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.641407  584371 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 19:08:22.641742  584371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642057  584371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.642568  584371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.642576  584371 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 19:08:22.642669  584371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.642876  584371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.642894  584371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.643310  584371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.799972  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.811504  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.815340  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.815659  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.817303  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.858380  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.864688  584371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1008 19:08:22.864727  584371 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.864762  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.877332  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1008 19:08:22.934971  584371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1008 19:08:22.935035  584371 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.935085  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945549  584371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1008 19:08:22.945594  584371 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.945644  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945645  584371 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1008 19:08:22.945683  584371 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.945685  584371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1008 19:08:22.945730  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945733  584371 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.945796  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981887  584371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1008 19:08:22.982012  584371 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.982059  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981954  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.082208  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.082210  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.082304  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.082411  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.082430  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.082543  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.178344  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.196633  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.196665  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.196733  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.209763  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.209830  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.310142  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.317659  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.317731  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.327221  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.331490  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.346298  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 19:08:23.346412  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.435656  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 19:08:23.435679  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1008 19:08:23.435783  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:23.435788  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:23.441591  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 19:08:23.441673  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:23.441696  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 19:08:23.441782  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 19:08:23.441814  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:23.441856  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:23.441901  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1008 19:08:23.441918  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.441947  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.445597  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1008 19:08:23.445630  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1008 19:08:23.449022  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1008 19:08:23.450009  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.373452  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:24.872600  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:23.448074  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:25.449287  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.950280  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.508431356s)
	I1008 19:08:25.950340  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.508402491s)
	I1008 19:08:25.950344  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1008 19:08:25.950357  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1008 19:08:25.950545  584371 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.50050623s)
	I1008 19:08:25.950600  584371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1008 19:08:25.950611  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.508516442s)
	I1008 19:08:25.950637  584371 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:25.950648  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1008 19:08:25.950680  584371 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:25.950688  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:25.950727  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:29.225357  584371 ssh_runner.go:235] Completed: which crictl: (3.274648192s)
	I1008 19:08:29.225514  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:29.225532  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.27477814s)
	I1008 19:08:29.225561  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1008 19:08:29.225593  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:29.225627  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:27.373617  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.374173  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:27.948313  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.948750  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.696201  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.470655089s)
	I1008 19:08:30.696255  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.470604601s)
	I1008 19:08:30.696284  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1008 19:08:30.696296  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:30.696317  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.696365  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.740520  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:32.685896  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.989500601s)
	I1008 19:08:32.685941  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1008 19:08:32.685971  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.685971  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.945412846s)
	I1008 19:08:32.686046  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.686045  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 19:08:32.686186  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:31.872718  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:33.873665  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:32.447765  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:34.948257  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.663874  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.977781248s)
	I1008 19:08:34.663914  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1008 19:08:34.663939  584371 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:34.663942  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977724244s)
	I1008 19:08:34.663973  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1008 19:08:34.663991  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:36.833283  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.169263327s)
	I1008 19:08:36.833320  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1008 19:08:36.833353  584371 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:36.833417  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:37.485901  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 19:08:37.485954  584371 cache_images.go:123] Successfully loaded all cached images
	I1008 19:08:37.485961  584371 cache_images.go:92] duration metric: took 14.844810749s to LoadCachedImages
	I1008 19:08:37.485973  584371 kubeadm.go:934] updating node { 192.168.61.141 8443 v1.31.1 crio true true} ...
	I1008 19:08:37.486084  584371 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-966632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:37.486149  584371 ssh_runner.go:195] Run: crio config
	I1008 19:08:37.544511  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:37.544535  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:37.544554  584371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:37.544576  584371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-966632 NodeName:no-preload-966632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:08:37.544718  584371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-966632"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:37.544792  584371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:08:37.556979  584371 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:37.557049  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:37.566249  584371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 19:08:37.583303  584371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:37.599535  584371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1008 19:08:37.616315  584371 ssh_runner.go:195] Run: grep 192.168.61.141	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:37.620089  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:37.632181  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:37.748647  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:37.765577  584371 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632 for IP: 192.168.61.141
	I1008 19:08:37.765600  584371 certs.go:194] generating shared ca certs ...
	I1008 19:08:37.765619  584371 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:37.765829  584371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:37.765890  584371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:37.765904  584371 certs.go:256] generating profile certs ...
	I1008 19:08:37.766020  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.key
	I1008 19:08:37.766095  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key.a515ed11
	I1008 19:08:37.766143  584371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key
	I1008 19:08:37.766334  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:37.766383  584371 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:37.766398  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:37.766430  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:37.766467  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:37.766501  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:37.766562  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:37.767588  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:37.804400  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:37.837466  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:37.865516  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:37.894827  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:08:37.918668  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:08:37.948238  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:37.974152  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:08:37.997284  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:38.019295  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:38.043392  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:38.067971  584371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:38.084940  584371 ssh_runner.go:195] Run: openssl version
	I1008 19:08:38.090779  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:38.102715  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107292  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107355  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.113456  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:38.123904  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:38.134337  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138503  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138561  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.143902  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:38.155393  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:38.167107  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171433  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171480  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.176968  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:38.188437  584371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:38.192733  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:38.198531  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:38.204187  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:38.210522  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:38.216328  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:38.222077  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
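The run of "openssl x509 -checkend 86400" commands above asks, for each control-plane certificate, whether it will expire within the next 86400 seconds (24 hours) before the restart proceeds. Below is a minimal Go sketch of the same check using only the standard library; the certificate path in main() is just one of the paths this log happens to probe, and the sketch is an illustration, not minikube's implementation.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certExpiresWithin reports whether the PEM certificate at path expires
    // within the given window -- the equivalent of openssl x509 -checkend.
    func certExpiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Path taken from this log; any of the checked certs would work the same way.
        expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }

A false result corresponds to the silent (exit 0) openssl runs seen in the log; a certificate about to expire would instead be a candidate for regeneration before the cluster is restarted.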
	I1008 19:08:38.227724  584371 kubeadm.go:392] StartCluster: {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:38.227802  584371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:38.227882  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.262461  584371 cri.go:89] found id: ""
	I1008 19:08:38.262532  584371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:38.272591  584371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:38.272612  584371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:38.272677  584371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:38.282621  584371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:38.283683  584371 kubeconfig.go:125] found "no-preload-966632" server: "https://192.168.61.141:8443"
	I1008 19:08:38.286019  584371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:38.295315  584371 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.141
	I1008 19:08:38.295344  584371 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:38.295357  584371 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:38.295400  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.329462  584371 cri.go:89] found id: ""
	I1008 19:08:38.329533  584371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:38.345901  584371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:38.354899  584371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:38.354920  584371 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:38.354965  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:38.363242  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:38.363282  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:38.373063  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:38.381479  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:38.381530  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:38.390679  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.400033  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:38.400071  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.409308  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:38.417842  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:38.417876  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:38.427251  584371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:38.437010  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:38.562381  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.344247  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:36.372911  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:38.872768  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:37.448043  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:39.956579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.550458  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.619345  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.718016  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:39.718126  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.218974  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.719108  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.741178  584371 api_server.go:72] duration metric: took 1.023163924s to wait for apiserver process to appear ...
	I1008 19:08:40.741210  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:08:40.741235  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:40.741767  584371 api_server.go:269] stopped: https://192.168.61.141:8443/healthz: Get "https://192.168.61.141:8443/healthz": dial tcp 192.168.61.141:8443: connect: connection refused
	I1008 19:08:41.241356  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.787235  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:08:43.787284  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:08:43.787306  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.914606  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:43.914653  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:44.242033  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.247068  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.247097  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:40.873394  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:43.373475  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:42.446900  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:44.447141  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.742212  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.756340  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.756371  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.241997  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.246343  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.246367  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.741898  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.749274  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.749301  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.241889  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.246127  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.246155  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.741694  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.746192  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.746219  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:47.242250  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:47.246571  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:08:47.252812  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:08:47.252843  584371 api_server.go:131] duration metric: took 6.511626175s to wait for apiserver health ...
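The healthz probes above trace a typical kube-apiserver restart: first a connection-refused error while nothing is listening yet, then a 403 because the unauthenticated probe reaches an API server that is up but rejecting anonymous access to /healthz, then 500s while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller finish, and finally a plain 200 "ok". A rough Go sketch of such a polling loop follows; the URL is the one from this log, TLS verification is skipped purely to keep the sketch short (minikube's real client trusts the cluster CA), and this is an illustration rather than minikube's api_server.go.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // InsecureSkipVerify is for this sketch only; a real client should trust the cluster CA.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                // e.g. "connect: connection refused" while the apiserver is still starting
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            // 403 (anonymous user) and 500 (post-start hooks still running) land here.
            fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.141:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }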
	I1008 19:08:47.252852  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:47.252858  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:47.254723  584371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:08:47.255933  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:08:47.266073  584371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
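The two steps above create /etc/cni/net.d and write a 496-byte bridge conflist whose contents are not shown in the log. Purely to illustrate the general shape of such a file, the sketch below writes a generic bridge-plus-host-local configuration; every field value is an assumption, and the result is not the file minikube actually generates.

    package main

    import "os"

    // Illustrative only: a generic bridge CNI config with host-local IPAM.
    // The values below are assumptions, not the 496-byte conflist from the log.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }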
	I1008 19:08:47.284042  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:08:47.293401  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:08:47.293432  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:08:47.293439  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:08:47.293450  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:08:47.293456  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:08:47.293464  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:08:47.293469  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:08:47.293474  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:08:47.293478  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:08:47.293484  584371 system_pods.go:74] duration metric: took 9.422158ms to wait for pod list to return data ...
	I1008 19:08:47.293493  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:08:47.296923  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:08:47.296947  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:08:47.296960  584371 node_conditions.go:105] duration metric: took 3.462212ms to run NodePressure ...
	I1008 19:08:47.296979  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:47.562271  584371 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566914  584371 kubeadm.go:739] kubelet initialised
	I1008 19:08:47.566938  584371 kubeadm.go:740] duration metric: took 4.63692ms waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566950  584371 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:47.571271  584371 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.575633  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575659  584371 pod_ready.go:82] duration metric: took 4.364181ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.575671  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575680  584371 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.579443  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579465  584371 pod_ready.go:82] duration metric: took 3.775248ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.579475  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579483  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.583747  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583775  584371 pod_ready.go:82] duration metric: took 4.277306ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.583785  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583797  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.687618  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687652  584371 pod_ready.go:82] duration metric: took 103.843425ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.687663  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687669  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.087568  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087601  584371 pod_ready.go:82] duration metric: took 399.92202ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.087613  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087622  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.487223  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487256  584371 pod_ready.go:82] duration metric: took 399.625038ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.487269  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487278  584371 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.887764  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887798  584371 pod_ready.go:82] duration metric: took 400.504473ms for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.887812  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887821  584371 pod_ready.go:39] duration metric: took 1.320859293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
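Each pod_ready block above polls one system-critical pod and, while the hosting node still reports Ready=False, records the pod as skipped rather than failing outright. Outside minikube, an equivalent per-pod Ready check can be written against client-go roughly as below; the kubeconfig path and pod name are lifted from this log for illustration, and the sketch is not minikube's pod_ready.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the "Ready" condition check performed for each pod above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19774-529764/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPodReady(cs, "kube-system", "etcd-no-preload-966632", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

The log's variant additionally inspects the node's own Ready condition first, which is why these waits end with "skipping!" messages instead of errors while no-preload-966632 is still NotReady.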
	I1008 19:08:48.887842  584371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:08:48.901255  584371 ops.go:34] apiserver oom_adj: -16
	I1008 19:08:48.901279  584371 kubeadm.go:597] duration metric: took 10.628659432s to restartPrimaryControlPlane
	I1008 19:08:48.901290  584371 kubeadm.go:394] duration metric: took 10.673572592s to StartCluster
	I1008 19:08:48.901313  584371 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.901397  584371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:48.904024  584371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.904361  584371 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:08:48.904455  584371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:08:48.904549  584371 addons.go:69] Setting storage-provisioner=true in profile "no-preload-966632"
	I1008 19:08:48.904565  584371 addons.go:69] Setting default-storageclass=true in profile "no-preload-966632"
	I1008 19:08:48.904594  584371 addons.go:234] Setting addon storage-provisioner=true in "no-preload-966632"
	W1008 19:08:48.904603  584371 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:08:48.904603  584371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-966632"
	I1008 19:08:48.904574  584371 addons.go:69] Setting metrics-server=true in profile "no-preload-966632"
	I1008 19:08:48.904646  584371 addons.go:234] Setting addon metrics-server=true in "no-preload-966632"
	I1008 19:08:48.904651  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.904652  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1008 19:08:48.904670  584371 addons.go:243] addon metrics-server should already be in state true
	I1008 19:08:48.904705  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.905079  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905116  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905133  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905151  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905159  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905205  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.906774  584371 out.go:177] * Verifying Kubernetes components...
	I1008 19:08:48.908138  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:48.942865  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1008 19:08:48.943612  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.944201  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.944232  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.944667  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.944748  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1008 19:08:48.945485  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.945526  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.945763  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.946464  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.946484  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.946530  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1008 19:08:48.946935  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.947052  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.947649  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.947693  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.948006  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.948027  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.948379  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.948602  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.951770  584371 addons.go:234] Setting addon default-storageclass=true in "no-preload-966632"
	W1008 19:08:48.951788  584371 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:08:48.951819  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.952055  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.952095  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.962422  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1008 19:08:48.962931  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.963509  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.963532  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.963908  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.964117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.965879  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.967812  584371 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:08:48.967853  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1008 19:08:48.967817  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1008 19:08:48.968376  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968436  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968885  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.968906  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.968964  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:08:48.968986  584371 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:08:48.969010  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.969290  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.969449  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.969472  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.969910  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.969941  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.970187  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.970430  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.972100  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972523  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.972544  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972677  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.972735  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.973016  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.973191  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.973323  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.974390  584371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:48.975651  584371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:48.975670  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:08:48.975686  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.978500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.978855  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.978876  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.979079  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.979474  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.979640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.979766  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.994846  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1008 19:08:48.995180  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.995592  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.995607  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.995976  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.996173  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.998270  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.998549  584371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:48.998568  584371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:08:48.998591  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:49.000647  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.000908  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:49.000924  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.001078  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:49.001185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:49.001282  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:49.001358  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
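	(Editorial sketch.) The "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:NNNNN" lines above, followed by calls such as .GetVersion and .GetMachineName, reflect libmachine's pattern of running each driver as a separate process that serves RPC on an ephemeral localhost port. The following is a minimal illustrative sketch of that pattern using Go's net/rpc; the Driver type and its method set here are hypothetical stand-ins, not minikube's actual driver plugin code.

	package main

	import (
		"fmt"
		"log"
		"net"
		"net/rpc"
	)

	// Driver is a stand-in for a machine driver; the method set is hypothetical.
	type Driver struct{}

	func (d *Driver) GetVersion(_ struct{}, reply *int) error { *reply = 1; return nil }

	func (d *Driver) GetMachineName(_ struct{}, reply *string) error {
		*reply = "no-preload-966632" // machine name taken from the log above
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			log.Fatal(err)
		}
		// Port 0 lets the kernel pick a free port, matching the random
		// 127.0.0.1:NNNNN addresses logged for each plugin server.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("Plugin server listening at address", ln.Addr())
		go srv.Accept(ln)

		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			log.Fatal(err)
		}
		var name string
		if err := client.Call("Driver.GetMachineName", struct{}{}, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Println("machine:", name)
	}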
	I1008 19:08:49.118217  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:49.138077  584371 node_ready.go:35] waiting up to 6m0s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:49.217300  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:49.241237  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:49.365395  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:08:49.365420  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:08:45.873500  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.373215  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:49.403596  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:08:49.403625  584371 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:08:49.438480  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:49.438540  584371 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:08:49.464366  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:50.474783  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.233506833s)
	I1008 19:08:50.474850  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474862  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.474914  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.257567473s)
	I1008 19:08:50.474955  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474964  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475191  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475206  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475215  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475221  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475280  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475289  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475297  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475303  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475310  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475441  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475454  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475582  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475596  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475628  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482003  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.482031  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.482315  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.482351  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482372  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.512902  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.048483922s)
	I1008 19:08:50.512957  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.512980  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513241  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513257  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513261  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513299  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.513307  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513534  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513552  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513561  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513577  584371 addons.go:475] Verifying addon metrics-server=true in "no-preload-966632"
	I1008 19:08:50.515302  584371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
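	(Editorial sketch.) The addon enablement above copies each manifest to /etc/kubernetes/addons/ on the node and then runs kubectl against the node-local kubeconfig, exactly as shown in the "sudo KUBECONFIG=... kubectl apply -f ..." lines. The sketch below only rebuilds that command with os/exec to make its shape explicit; in the real run it is executed on the node over SSH (ssh_runner), not on the test host, and this is not minikube's addons.go implementation.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// applyAddons runs the same command form as the log:
	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <m1> -f <m2> ...
	func applyAddons(manifests []string) error {
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// Manifest paths as scp'd in the log above.
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		if err := applyAddons(manifests); err != nil {
			log.Fatal(err)
		}
	}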
	I1008 19:08:46.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.448332  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:50.449239  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.516457  584371 addons.go:510] duration metric: took 1.612011936s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:08:51.141437  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:53.142166  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:54.141208  584371 node_ready.go:49] node "no-preload-966632" has status "Ready":"True"
	I1008 19:08:54.141238  584371 node_ready.go:38] duration metric: took 5.003121669s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:54.141251  584371 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:54.146685  584371 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151059  584371 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:54.151078  584371 pod_ready.go:82] duration metric: took 4.369406ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151086  584371 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:50.872416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:53.372230  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:52.947461  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:54.950183  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.157153  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.157458  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.658595  584371 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.658617  584371 pod_ready.go:82] duration metric: took 4.507524391s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.658627  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663785  584371 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.663811  584371 pod_ready.go:82] duration metric: took 5.176586ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663823  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668310  584371 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.668342  584371 pod_ready.go:82] duration metric: took 4.509914ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668356  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672380  584371 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.672397  584371 pod_ready.go:82] duration metric: took 4.034104ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672405  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676499  584371 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.676517  584371 pod_ready.go:82] duration metric: took 4.106343ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676527  584371 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
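	(Editorial sketch.) The pod_ready.go lines above ("waiting up to 6m0s for pod ... to be Ready", then repeated has status "Ready":"False") amount to polling the pod's PodReady condition until it reports True or the deadline passes; the metrics-server pod never gets there because it points at the fake.domain test image. A minimal client-go sketch of that check follows, assuming client-go and using the pod name, namespace, and kubeconfig path from the log; it is not minikube's pod_ready.go.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-6867b74b74-rlt25", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal(`pod never reached "Ready":"True" within the timeout`)
	}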
	I1008 19:08:55.873069  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.372424  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:57.448182  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:59.947932  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.682583  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.682958  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:00.872650  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.872783  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:05.371825  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.447340  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:04.447504  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.683354  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.183362  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.183636  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.872083  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.874058  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.947502  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:08.948054  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.683833  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.188267  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:12.371967  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.372404  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.448009  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:13.948106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:15.948926  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
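	(Editorial sketch.) The cycle that just completed is the diagnostics pass for the old-k8s-version cluster: for each control-plane component it lists matching containers with "sudo crictl ps -a --quiet --name=<component>", and the empty results produce the No container was found matching "..." warnings before kubelet, dmesg, describe-nodes, CRI-O, and container-status logs are collected. A simplified Go sketch of that discovery step is below (illustrative only, would run on the node, not minikube's cri.go).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs whose name matches, via crictl, as in the log.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"}
		for _, name := range components {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", name, err)
				continue
			}
			// An empty slice here corresponds to: 0 containers: []
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}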
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:16.683453  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.184073  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:16.873511  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.372353  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:18.446864  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:20.449087  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
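	(Editorial sketch.) The half-second cadence of "sudo pgrep -xnf kube-apiserver.*minikube.*" above is a wait loop for the apiserver process; while pgrep keeps exiting non-zero, nothing is serving on localhost:8443, which is why every describe-nodes attempt fails with "connection ... refused". A rough sketch of such a poll is below; this is an assumption about the shape of the check, not minikube's exact logic.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the logged check:
	// pgrep -x (exact) -n (newest) -f (full command line) for the apiserver process.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil // pgrep exits 0 only when at least one process matched
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // timeout chosen for illustration
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kube-apiserver never started; API on localhost:8443 stays unreachable")
	}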
	I1008 19:09:21.184209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.683535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:21.373869  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.872606  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:22.947224  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.947961  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:26.183587  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.184349  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:26.373049  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.873435  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.447254  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:29.447470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
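Each log-gathering cycle for PID 585386 finds no control-plane containers and then fails "describe nodes" with a connection refused on localhost:8443, which is consistent with the kube-apiserver never having come up on that node. A minimal sketch of confirming this by hand over minikube ssh, reusing the exact binary paths already shown in the log:

	sudo crictl ps -a --name=kube-apiserver            # empty output, matching cri.go:89 found id: ""
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig        # fails: connection to localhost:8443 refused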
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:30.683953  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.183412  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.371635  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.373244  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.448173  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.458099  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.947741  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:35.184447  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.682974  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.872226  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.872308  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:39.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:38.447494  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:40.947078  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:39.686907  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.183930  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.184443  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.372207  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.872360  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.947712  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:45.447011  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:46.682572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.683539  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.372618  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.373137  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.448127  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.947787  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:50.683784  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:53.183034  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.873417  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.373414  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.948470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.447675  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
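Since crictl reports no kube-apiserver, etcd, kube-scheduler or kube-controller-manager containers at all, the usual next step is to look at the kubelet and the static pod manifests it should be launching. A sketch of that check, assuming the standard kubeadm layout that minikube uses (these paths do not appear in this log and are an assumption):

	ls /etc/kubernetes/manifests/                        # expect kube-apiserver.yaml, etcd.yaml, etc.
	sudo journalctl -u kubelet --no-pager | tail -n 50   # look for errors starting the static pods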
	I1008 19:09:55.184058  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.683581  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.872213  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.872323  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.447790  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.947901  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.948561  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:00.182341  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.183137  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:04.186590  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.872469  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.872708  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.372543  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.447536  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.947676  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.683572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.183605  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.373804  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.872253  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.947764  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.948705  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:11.184118  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:13.184173  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.372084  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.373845  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.447832  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.948882  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:15.682883  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.184458  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:16.374416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.872958  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:17.446820  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.448067  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:20.686987  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.182360  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.372104  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.872156  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.947427  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.950711  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:25.183749  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:27.184437  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.372132  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.372299  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.948117  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.948772  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:10:29.682860  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:31.683468  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:34.184018  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.872607  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:32.872784  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.373822  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:33.447573  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.447614  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:36.682470  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:38.683099  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.872270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.872490  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.450208  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.947418  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:40.683975  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.182280  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.372711  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.873233  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.446673  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.447301  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:45.188927  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.682373  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.371936  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:49.372190  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.447866  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:48.947579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.683659  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.182955  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:54.184535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.373113  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.373239  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.446974  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.447804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.947655  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:56.682535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.683610  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.872286  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:57.872418  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:59.872720  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.447354  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:00.447456  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
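
	The repeated "sudo crictl ps -a --quiet --name=<component>" calls above check whether any control-plane container exists for each component; every check comes back empty, which is why the retry loop keeps running. A minimal sketch of the same check (illustrative only, not minikube's cri.go; it assumes crictl is installed and sudo works non-interactively on the node):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same component names queried in the log above.
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            // crictl prints one container ID per line with --quiet.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("W: crictl failed for %q: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                fmt.Printf("W: no container was found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("I: found %d container(s) for %q\n", len(ids), name)
	        }
	    }
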
	I1008 19:11:00.684164  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:03.182802  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.873903  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.372767  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:02.947088  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.948196  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
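
	After the container checks, each cycle gathers the same host-side logs: kubelet and CRI-O via journalctl, kernel warnings via dmesg, and container state via crictl (falling back to docker). A rough stand-alone equivalent of those collectors, as a sketch under the assumption that the commands are run directly on the node rather than through ssh_runner:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Commands copied from the "Gathering logs for ..." lines above.
	        collectors := []struct{ name, cmd string }{
	            {"kubelet", "sudo journalctl -u kubelet -n 400"},
	            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	            {"CRI-O", "sudo journalctl -u crio -n 400"},
	            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	        }
	        for _, c := range collectors {
	            fmt.Println("Gathering logs for", c.name, "...")
	            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("collector %q failed: %v\n", c.name, err)
	            }
	            fmt.Print(string(out))
	        }
	    }
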
	I1008 19:11:05.683195  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.184305  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:06.872641  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.872811  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.447814  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:09.948377  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:10.186012  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.682829  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:11.374597  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:13.872241  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.447966  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.448396  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.683559  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.185276  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.372851  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:18.872479  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.947677  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:19.449701  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
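
	Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", meaning no apiserver is listening yet on the node. One way to confirm that before shelling out to kubectl is a plain TCP probe of the apiserver port; this is a hypothetical helper, not part of the test suite:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Probe the default secure apiserver port used in the errors above.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Printf("apiserver not reachable, kubectl describe nodes will fail the same way: %v\n", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port open; kubectl describe nodes should be able to connect")
	    }
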
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:19.682652  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.683209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.684681  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.371629  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.373290  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.947319  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:24.446685  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
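
	The interleaved pod_ready.go lines come from parallel StartStop tests polling their metrics-server pods, whose Ready condition stays "False" throughout this window. The same check can be reproduced with kubectl's jsonpath output; the context and pod name in this sketch are placeholders, not values taken from this run:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // podReady reports whether the named pod's Ready condition is "True",
	    // using kubectl's jsonpath output.
	    func podReady(kubeContext, namespace, pod string) (bool, error) {
	        out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
	            "get", "pod", pod, "-o",
	            `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	        if err != nil {
	            return false, err
	        }
	        return strings.TrimSpace(string(out)) == "True", nil
	    }

	    func main() {
	        // Placeholder profile and pod name; substitute the ones from the run
	        // being debugged.
	        for i := 0; i < 5; i++ {
	            ready, err := podReady("minikube", "kube-system", "metrics-server-6867b74b74-xxxxx")
	            if err != nil {
	                fmt.Println("lookup failed:", err)
	            } else {
	                fmt.Printf("pod Ready=%v\n", ready)
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }
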
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.183070  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.185526  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:25.872866  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.371245  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.371878  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.447559  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.948355  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.949170  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:30.683714  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.182628  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.373119  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:34.872336  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.447847  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:35.947756  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:35.183021  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.682254  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.371644  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:39.372270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.447549  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:40.946928  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.182928  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.873545  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.372751  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.947699  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.949106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:44.684067  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.182248  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.183397  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:46.871983  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:48.872101  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.447284  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.448275  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.949171  585014 pod_ready.go:82] duration metric: took 4m0.008012606s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:11:50.949202  585014 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:11:50.949213  585014 pod_ready.go:39] duration metric: took 4m6.974004451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:11:50.949249  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:11:50.949290  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.949351  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.998560  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:50.998584  585014 cri.go:89] found id: ""
	I1008 19:11:50.998591  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:11:50.998649  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.003407  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:51.003490  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:51.183611  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:53.683418  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.872692  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:52.873320  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:55.372722  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:51.049315  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:51.049343  585014 cri.go:89] found id: ""
	I1008 19:11:51.049353  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:11:51.049411  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.055212  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:51.055281  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:51.101271  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.101292  585014 cri.go:89] found id: ""
	I1008 19:11:51.101300  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:11:51.101360  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.105902  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:51.105966  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:51.150355  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.150390  585014 cri.go:89] found id: ""
	I1008 19:11:51.150402  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:11:51.150468  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.155116  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:51.155193  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:51.197754  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:51.197779  585014 cri.go:89] found id: ""
	I1008 19:11:51.197790  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:11:51.197846  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.201957  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:51.202023  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:51.239982  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:51.240001  585014 cri.go:89] found id: ""
	I1008 19:11:51.240009  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:11:51.240064  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.244580  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:51.244645  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:51.280099  585014 cri.go:89] found id: ""
	I1008 19:11:51.280126  585014 logs.go:282] 0 containers: []
	W1008 19:11:51.280137  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:51.280144  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:11:51.280205  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:11:51.323467  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:51.323508  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:51.323514  585014 cri.go:89] found id: ""
	I1008 19:11:51.323525  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:11:51.323676  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.328091  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.332113  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:51.332139  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:11:51.455430  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:11:51.455463  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.492792  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:11:51.492824  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.533732  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.533768  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:52.085919  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:11:52.085972  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:52.120874  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:52.120912  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:11:52.163961  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164188  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.164330  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164489  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.195681  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:52.195716  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:52.210569  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:11:52.210601  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:52.256667  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:11:52.256700  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:52.303627  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:11:52.303685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:52.340250  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:11:52.340279  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:52.402179  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:11:52.402213  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:52.440288  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:11:52.440326  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:52.478952  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.478979  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:11:52.479043  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:11:52.479060  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479068  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.479077  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479084  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.479092  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.479101  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.182942  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:58.184307  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:57.373902  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:59.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:00.683230  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:03.184294  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.372439  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:04.871327  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.480085  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.498401  585014 api_server.go:72] duration metric: took 4m26.226421652s to wait for apiserver process to appear ...
	I1008 19:12:02.498433  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:12:02.498479  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.498544  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:02.533531  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:02.533563  585014 cri.go:89] found id: ""
	I1008 19:12:02.533575  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:02.533643  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.537914  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:02.537985  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:02.579011  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:02.579039  585014 cri.go:89] found id: ""
	I1008 19:12:02.579049  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:02.579111  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.583628  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:02.583695  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:02.625038  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.625065  585014 cri.go:89] found id: ""
	I1008 19:12:02.625075  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:02.625138  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.629262  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:02.629331  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:02.662964  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:02.662988  585014 cri.go:89] found id: ""
	I1008 19:12:02.662997  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:02.663052  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.666955  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:02.667013  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:02.704552  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:02.704578  585014 cri.go:89] found id: ""
	I1008 19:12:02.704589  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:02.704640  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.708910  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:02.708962  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:02.743196  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.743220  585014 cri.go:89] found id: ""
	I1008 19:12:02.743229  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:02.743276  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.747488  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:02.747563  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:02.789367  585014 cri.go:89] found id: ""
	I1008 19:12:02.789405  585014 logs.go:282] 0 containers: []
	W1008 19:12:02.789418  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:02.789426  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:02.789479  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:02.828607  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:02.828640  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.828646  585014 cri.go:89] found id: ""
	I1008 19:12:02.828656  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:02.828723  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.832981  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.837258  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:02.837284  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.874214  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:02.874249  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.925844  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:02.925879  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.963715  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:02.963744  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.009069  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.009102  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:03.046628  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.046816  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.046947  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.047129  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.080027  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.080068  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:03.203192  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:03.203233  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:03.254645  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:03.254681  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:03.300881  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:03.300918  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:03.347403  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.347440  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.802754  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.802801  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.816658  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:03.816695  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:03.873630  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:03.873670  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:03.910834  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.910862  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:03.910932  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:03.910946  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910955  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.910972  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910983  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.910994  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.911006  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:05.682916  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:07.683273  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:06.872011  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:08.873682  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:09.684055  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.182786  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:14.186868  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:10.866893  585096 pod_ready.go:82] duration metric: took 4m0.000956825s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:10.866937  585096 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1008 19:12:10.866961  585096 pod_ready.go:39] duration metric: took 4m15.184400794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:10.866992  585096 kubeadm.go:597] duration metric: took 4m23.829186185s to restartPrimaryControlPlane
	W1008 19:12:10.867049  585096 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:10.867092  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:13.911944  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:12:13.917530  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:12:13.918513  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:12:13.918537  585014 api_server.go:131] duration metric: took 11.420096691s to wait for apiserver health ...
	I1008 19:12:13.918546  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:12:13.918570  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:13.918621  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:13.957026  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:13.957048  585014 cri.go:89] found id: ""
	I1008 19:12:13.957057  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:13.957114  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:13.961553  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:13.961611  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:13.996466  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:13.996497  585014 cri.go:89] found id: ""
	I1008 19:12:13.996508  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:13.996570  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.000972  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:14.001036  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:14.034888  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.034917  585014 cri.go:89] found id: ""
	I1008 19:12:14.034929  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:14.034989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.039145  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:14.039216  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:14.074109  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:14.074134  585014 cri.go:89] found id: ""
	I1008 19:12:14.074145  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:14.074202  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.078291  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:14.078371  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:14.113375  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:14.113403  585014 cri.go:89] found id: ""
	I1008 19:12:14.113413  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:14.113475  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.117909  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:14.118002  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:14.153800  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:14.153823  585014 cri.go:89] found id: ""
	I1008 19:12:14.153833  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:14.153898  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.158233  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:14.158302  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:14.195093  585014 cri.go:89] found id: ""
	I1008 19:12:14.195123  585014 logs.go:282] 0 containers: []
	W1008 19:12:14.195133  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:14.195142  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:14.195203  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:14.230894  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:14.230917  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:14.230921  585014 cri.go:89] found id: ""
	I1008 19:12:14.230929  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:14.230989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.236299  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.240914  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:14.240940  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:14.282289  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282488  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:14.282643  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282824  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:14.315207  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:14.315235  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:14.433616  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:14.433647  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:14.482640  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:14.482685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.524749  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:14.524788  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:14.979562  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:14.979629  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:15.016898  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:15.016941  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:15.058447  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:15.058478  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:15.114345  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:15.114384  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:15.128920  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:15.128948  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:15.176775  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:15.176817  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:15.215091  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:15.215129  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:15.256687  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:15.256731  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:15.311551  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311583  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:15.311641  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:15.311653  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311664  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:15.311676  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311681  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:15.311687  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311695  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:16.682464  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:19.182595  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:21.184074  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:23.682867  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:25.319305  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:12:25.319336  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.319340  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.319344  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.319348  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.319351  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.319354  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.319362  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.319365  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.319371  585014 system_pods.go:74] duration metric: took 11.400819931s to wait for pod list to return data ...
	I1008 19:12:25.319378  585014 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:12:25.322115  585014 default_sa.go:45] found service account: "default"
	I1008 19:12:25.322135  585014 default_sa.go:55] duration metric: took 2.751457ms for default service account to be created ...
	I1008 19:12:25.322143  585014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:12:25.326570  585014 system_pods.go:86] 8 kube-system pods found
	I1008 19:12:25.326590  585014 system_pods.go:89] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.326595  585014 system_pods.go:89] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.326599  585014 system_pods.go:89] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.326604  585014 system_pods.go:89] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.326610  585014 system_pods.go:89] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.326615  585014 system_pods.go:89] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.326625  585014 system_pods.go:89] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.326633  585014 system_pods.go:89] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.326642  585014 system_pods.go:126] duration metric: took 4.494323ms to wait for k8s-apps to be running ...
	I1008 19:12:25.326651  585014 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:12:25.326701  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:25.344597  585014 system_svc.go:56] duration metric: took 17.941012ms WaitForService to wait for kubelet
	I1008 19:12:25.344621  585014 kubeadm.go:582] duration metric: took 4m49.072648847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:12:25.344638  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:12:25.347385  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:12:25.347404  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:12:25.347425  585014 node_conditions.go:105] duration metric: took 2.783181ms to run NodePressure ...
	I1008 19:12:25.347437  585014 start.go:241] waiting for startup goroutines ...
	I1008 19:12:25.347450  585014 start.go:246] waiting for cluster config update ...
	I1008 19:12:25.347463  585014 start.go:255] writing updated cluster config ...
	I1008 19:12:25.347823  585014 ssh_runner.go:195] Run: rm -f paused
	I1008 19:12:25.395903  585014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:12:25.397911  585014 out.go:177] * Done! kubectl is now configured to use "embed-certs-783146" cluster and "default" namespace by default
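With the profile reported as ready, the state the preceding polls were waiting on can be confirmed directly against the cluster; a minimal sketch, assuming the context name printed above and a host holding the matching kubeconfig (hypothetical follow-up checks, not part of the recorded run):

    kubectl --context embed-certs-783146 get nodes -o wide
    kubectl --context embed-certs-783146 -n kube-system get pods
    # metrics-server is expected to stay Pending in these integration runs, since the addon is
    # pointed at an unreachable image (see the "fake.domain/registry.k8s.io/echoserver:1.4"
    # line further down in this log for the default-k8s-diff-port profile).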
	I1008 19:12:25.683645  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:28.182995  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:30.183567  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:32.682881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.013046  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.145916528s)
	I1008 19:12:37.013156  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:37.028010  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:37.037493  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:37.046435  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:37.046455  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:37.046495  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:12:37.055422  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:37.055482  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:37.064538  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:12:37.072968  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:37.073021  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:37.081754  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.090143  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:37.090179  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.098726  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:12:37.107261  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:37.107308  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:37.115975  585096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:37.163570  585096 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:12:37.163642  585096 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:37.272891  585096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:37.273025  585096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:37.273151  585096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:12:37.284204  585096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:37.286084  585096 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:37.286175  585096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:37.286263  585096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:37.286385  585096 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:37.286443  585096 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:37.286545  585096 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:37.286638  585096 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:37.286729  585096 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:37.286812  585096 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:37.286912  585096 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:37.287010  585096 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:37.287082  585096 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:37.287172  585096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:37.602946  585096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:37.727897  585096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:12:37.932126  585096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:37.989742  585096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:38.036655  585096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:38.037085  585096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:38.040618  585096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:35.182881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.683718  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:38.042238  585096 out.go:235]   - Booting up control plane ...
	I1008 19:12:38.042374  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:38.042568  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:38.043504  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:38.065666  585096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:38.071727  585096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:38.071814  585096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:38.210382  585096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:12:38.210516  585096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:12:39.213697  585096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003319891s
	I1008 19:12:39.213803  585096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:12:43.717718  585096 kubeadm.go:310] [api-check] The API server is healthy after 4.502167036s
	I1008 19:12:43.728628  585096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:12:43.744283  585096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:12:43.775369  585096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:12:43.775621  585096 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-142496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:12:43.788583  585096 kubeadm.go:310] [bootstrap-token] Using token: srsq4v.7le212xun40ljc7w
	I1008 19:12:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:42.183680  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:44.185065  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:43.789834  585096 out.go:235]   - Configuring RBAC rules ...
	I1008 19:12:43.789945  585096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:12:43.796091  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:12:43.807906  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:12:43.811025  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:12:43.814445  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:12:43.817615  585096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:12:44.122839  585096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:12:44.567387  585096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:12:45.122714  585096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:12:45.123480  585096 kubeadm.go:310] 
	I1008 19:12:45.123590  585096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:12:45.123617  585096 kubeadm.go:310] 
	I1008 19:12:45.123740  585096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:12:45.123749  585096 kubeadm.go:310] 
	I1008 19:12:45.123789  585096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:12:45.123870  585096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:12:45.123958  585096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:12:45.123984  585096 kubeadm.go:310] 
	I1008 19:12:45.124064  585096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:12:45.124080  585096 kubeadm.go:310] 
	I1008 19:12:45.124152  585096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:12:45.124162  585096 kubeadm.go:310] 
	I1008 19:12:45.124248  585096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:12:45.124366  585096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:12:45.124456  585096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:12:45.124469  585096 kubeadm.go:310] 
	I1008 19:12:45.124579  585096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:12:45.124682  585096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:12:45.124692  585096 kubeadm.go:310] 
	I1008 19:12:45.124804  585096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.124926  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:12:45.124953  585096 kubeadm.go:310] 	--control-plane 
	I1008 19:12:45.124958  585096 kubeadm.go:310] 
	I1008 19:12:45.125086  585096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:12:45.125093  585096 kubeadm.go:310] 
	I1008 19:12:45.125182  585096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.125321  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:12:45.126852  585096 kubeadm.go:310] W1008 19:12:37.105673    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127231  585096 kubeadm.go:310] W1008 19:12:37.106373    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127380  585096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
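The two v1beta3 deprecation warnings above name their own remedy; a sketch of that migration using the kubeadm binary and config path that appear earlier in this log (the output filename is invented for illustration, and the step was not run here):

    # Rewrite the deprecated kubeadm.k8s.io/v1beta3 config as the newer API version.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml   # hypothetical output path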
	I1008 19:12:45.127429  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:12:45.127452  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:12:45.129742  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:12:45.130870  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:12:45.143909  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
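The 496-byte conflist pushed above is what the bridge CNI plugin reads on the node; it can be checked with the same kind of command minikube's ssh_runner issues (illustrative, not executed here):

    # Inspect the CNI configuration minikube just copied onto the node.
    sudo cat /etc/cni/net.d/1-k8s.conflist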
	I1008 19:12:45.170901  585096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:12:45.170965  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:45.170972  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-142496 minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=default-k8s-diff-port-142496 minikube.k8s.io/primary=true
	I1008 19:12:45.198031  585096 ops.go:34] apiserver oom_adj: -16
	I1008 19:12:45.385789  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.684251  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:49.183225  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:45.886434  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.386165  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.886920  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.386786  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.885835  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.386706  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.885981  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.386856  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.471554  585096 kubeadm.go:1113] duration metric: took 4.300656747s to wait for elevateKubeSystemPrivileges
	I1008 19:12:49.471596  585096 kubeadm.go:394] duration metric: took 5m2.486064826s to StartCluster
	I1008 19:12:49.471627  585096 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.471736  585096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:12:49.473381  585096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.473676  585096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:12:49.473768  585096 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:12:49.473874  585096 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473897  585096 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142496"
	I1008 19:12:49.473899  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:12:49.473904  585096 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473923  585096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142496"
	W1008 19:12:49.473907  585096 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:12:49.473939  585096 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473955  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.473967  585096 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.473981  585096 addons.go:243] addon metrics-server should already be in state true
	I1008 19:12:49.474022  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.474283  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474313  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474338  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474366  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474373  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474405  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.475217  585096 out.go:177] * Verifying Kubernetes components...
	I1008 19:12:49.476402  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:12:49.490880  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1008 19:12:49.491405  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.492070  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.492093  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.492454  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.492990  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.493040  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.493623  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 19:12:49.493646  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1008 19:12:49.494011  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494067  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494548  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494565  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494763  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494790  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494930  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495102  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.495276  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495871  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.495908  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.498744  585096 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.498764  585096 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:12:49.498787  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.499142  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.499173  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.514047  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1008 19:12:49.514527  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.515028  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.515046  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.515493  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.515662  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.516519  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1008 19:12:49.517015  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.517643  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.517661  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.517706  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.517757  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I1008 19:12:49.518133  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.518458  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.518617  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.518643  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.518681  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.519107  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.519527  585096 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:12:49.519808  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.519923  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.520415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.520624  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:12:49.520644  585096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:12:49.520669  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.522226  585096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:12:49.523372  585096 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.523396  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:12:49.523415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.523947  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524437  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.524464  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.524830  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.525042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.525198  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.527349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.527693  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527842  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.528009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.528186  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.528325  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.536509  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I1008 19:12:49.536879  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.537341  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.537359  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.537606  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.537897  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.539570  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.539810  585096 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.539831  585096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:12:49.539848  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.542955  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.543522  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543543  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.543726  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.543888  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.544023  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.721845  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:12:49.741622  585096 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.763968  585096 node_ready.go:49] node "default-k8s-diff-port-142496" has status "Ready":"True"
	I1008 19:12:49.764005  585096 node_ready.go:38] duration metric: took 22.348135ms for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.764019  585096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:49.793150  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:49.867565  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.904041  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.912694  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:12:49.912723  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:12:49.962053  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:12:49.962082  585096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:12:50.004678  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.004709  585096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:12:50.068528  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.394807  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394824  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394836  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.394841  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395140  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395161  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395172  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395181  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395181  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395195  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395201  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.395205  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395262  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395425  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395439  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395616  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395668  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395643  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416509  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.416532  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.416815  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416865  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.416880  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634404  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634428  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634722  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.634744  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634752  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634761  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634769  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635036  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635066  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.635079  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.635100  585096 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-142496"
	I1008 19:12:50.636555  585096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:12:51.683959  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.182376  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:50.637816  585096 addons.go:510] duration metric: took 1.164063633s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:12:51.799881  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.299619  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:56.183179  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683102  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683159  584371 pod_ready.go:82] duration metric: took 4m0.006623922s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:58.683173  584371 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:12:58.683184  584371 pod_ready.go:39] duration metric: took 4m4.541923995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:58.683207  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:12:58.683245  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:58.683296  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:58.729385  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:58.729407  584371 cri.go:89] found id: ""
	I1008 19:12:58.729417  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:12:58.729472  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.734291  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:58.734382  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:58.772015  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:12:58.772050  584371 cri.go:89] found id: ""
	I1008 19:12:58.772062  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:12:58.772123  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.776231  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:58.776300  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:58.812962  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:58.812982  584371 cri.go:89] found id: ""
	I1008 19:12:58.812991  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:12:58.813046  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.816951  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:58.817002  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:58.852918  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:58.852939  584371 cri.go:89] found id: ""
	I1008 19:12:58.852946  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:12:58.852992  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.857184  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:58.857245  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:58.895233  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:12:58.895254  584371 cri.go:89] found id: ""
	I1008 19:12:58.895264  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:12:58.895317  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.899301  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:58.899354  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:58.933918  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:58.933946  584371 cri.go:89] found id: ""
	I1008 19:12:58.933956  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:12:58.934003  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.938274  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:58.938361  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:58.980067  584371 cri.go:89] found id: ""
	I1008 19:12:58.980094  584371 logs.go:282] 0 containers: []
	W1008 19:12:58.980104  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:58.980113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:58.980174  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:59.013783  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:12:59.013812  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.013817  584371 cri.go:89] found id: ""
	I1008 19:12:59.013827  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:12:59.013886  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.018420  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.024462  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:12:59.024486  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.062654  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:12:59.062688  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:59.110932  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:59.110966  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:59.248699  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:12:59.248734  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:59.294439  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:12:59.294473  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:59.331208  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:12:59.331241  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:59.374242  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:12:59.374283  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:56.799487  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.800290  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:59.800320  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.800349  585096 pod_ready.go:82] duration metric: took 10.007162242s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.800361  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804590  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.804609  585096 pod_ready.go:82] duration metric: took 4.240474ms for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804620  585096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808737  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.808754  585096 pod_ready.go:82] duration metric: took 4.127686ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808762  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813126  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.813146  585096 pod_ready.go:82] duration metric: took 4.37796ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813154  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817020  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.817039  585096 pod_ready.go:82] duration metric: took 3.878053ms for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817048  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197958  585096 pod_ready.go:93] pod "kube-proxy-wd5kv" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.197983  585096 pod_ready.go:82] duration metric: took 380.928087ms for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197992  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597495  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.597521  585096 pod_ready.go:82] duration metric: took 399.522182ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597529  585096 pod_ready.go:39] duration metric: took 10.833495765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:13:00.597545  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:13:00.597612  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:00.613266  585096 api_server.go:72] duration metric: took 11.139554705s to wait for apiserver process to appear ...
	I1008 19:13:00.613289  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:00.613308  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:13:00.618420  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:13:00.619376  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:00.619399  585096 api_server.go:131] duration metric: took 6.102941ms to wait for apiserver health ...
	I1008 19:13:00.619407  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:00.800687  585096 system_pods.go:59] 9 kube-system pods found
	I1008 19:13:00.800720  585096 system_pods.go:61] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:00.800729  585096 system_pods.go:61] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:00.800733  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:00.800737  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:00.800740  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:00.800743  585096 system_pods.go:61] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:00.800747  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:00.800752  585096 system_pods.go:61] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:00.800755  585096 system_pods.go:61] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:00.800765  585096 system_pods.go:74] duration metric: took 181.352111ms to wait for pod list to return data ...
	I1008 19:13:00.800773  585096 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:00.997631  585096 default_sa.go:45] found service account: "default"
	I1008 19:13:00.997657  585096 default_sa.go:55] duration metric: took 196.876434ms for default service account to be created ...
	I1008 19:13:00.997667  585096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:01.199366  585096 system_pods.go:86] 9 kube-system pods found
	I1008 19:13:01.199396  585096 system_pods.go:89] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:01.199402  585096 system_pods.go:89] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:01.199406  585096 system_pods.go:89] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:01.199409  585096 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:01.199413  585096 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:01.199416  585096 system_pods.go:89] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:01.199419  585096 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:01.199426  585096 system_pods.go:89] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:01.199430  585096 system_pods.go:89] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:01.199439  585096 system_pods.go:126] duration metric: took 201.766214ms to wait for k8s-apps to be running ...
	I1008 19:13:01.199447  585096 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:01.199492  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:01.214863  585096 system_svc.go:56] duration metric: took 15.401989ms WaitForService to wait for kubelet
	I1008 19:13:01.214895  585096 kubeadm.go:582] duration metric: took 11.741185862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:01.214919  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:01.397506  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:01.397530  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:01.397541  585096 node_conditions.go:105] duration metric: took 182.616774ms to run NodePressure ...
	I1008 19:13:01.397553  585096 start.go:241] waiting for startup goroutines ...
	I1008 19:13:01.397560  585096 start.go:246] waiting for cluster config update ...
	I1008 19:13:01.397570  585096 start.go:255] writing updated cluster config ...
	I1008 19:13:01.397828  585096 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:01.448158  585096 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:01.450201  585096 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142496" cluster and "default" namespace by default
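	(Aside, not captured in the log above: at this point the default-k8s-diff-port-142496 cluster is reported ready, but metrics-server-6867b74b74-wvh5g is still Pending. A minimal manual re-check of the same conditions the test polls for, assuming the kubeconfig context matches the profile name and that the metrics-server addon pods carry the usual k8s-app=metrics-server label, might look like:
		kubectl --context default-k8s-diff-port-142496 -n kube-system get pods -l k8s-app=metrics-server   # expect the pod to move from Pending to Running/Ready
		curl -k https://192.168.50.213:8444/healthz                                                         # same apiserver healthz endpoint probed above; expect "ok"
	)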
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:59.438777  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:59.438814  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:59.945253  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:59.945302  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:00.016570  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:00.016607  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:00.034150  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:00.034183  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:00.075423  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:00.075456  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:00.111132  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:00.111164  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.646570  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:02.666594  584371 api_server.go:72] duration metric: took 4m13.762192057s to wait for apiserver process to appear ...
	I1008 19:13:02.666620  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:02.666663  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:02.666718  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:02.704214  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:02.704242  584371 cri.go:89] found id: ""
	I1008 19:13:02.704250  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:02.704298  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.708636  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:02.708717  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:02.748418  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:02.748444  584371 cri.go:89] found id: ""
	I1008 19:13:02.748455  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:02.748515  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.753267  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:02.753332  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:02.790534  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:02.790562  584371 cri.go:89] found id: ""
	I1008 19:13:02.790571  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:02.790636  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.794880  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:02.794950  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:02.834754  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:02.834774  584371 cri.go:89] found id: ""
	I1008 19:13:02.834781  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:02.834830  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.839391  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:02.839463  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:02.878344  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:02.878371  584371 cri.go:89] found id: ""
	I1008 19:13:02.878380  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:02.878425  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.882939  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:02.883025  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:02.920081  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:02.920104  584371 cri.go:89] found id: ""
	I1008 19:13:02.920112  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:02.920168  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.924141  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:02.924205  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:02.959700  584371 cri.go:89] found id: ""
	I1008 19:13:02.959730  584371 logs.go:282] 0 containers: []
	W1008 19:13:02.959741  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:02.959750  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:02.959822  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:02.996900  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.996927  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:02.996933  584371 cri.go:89] found id: ""
	I1008 19:13:02.996940  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:02.996989  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.001152  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.005021  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:03.005046  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:03.069775  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:03.069813  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:03.120028  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:03.120060  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:03.155756  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:03.155784  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:03.195587  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:03.195624  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:03.231844  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:03.231875  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:03.271156  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:03.271187  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:03.286994  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:03.287017  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:03.397237  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:03.397269  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:03.442373  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:03.442407  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:03.500191  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:03.500222  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:03.535448  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:03.535490  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:03.966382  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:03.966425  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:06.513885  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:13:06.518111  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:13:06.519310  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:06.519331  584371 api_server.go:131] duration metric: took 3.852704338s to wait for apiserver health ...
	I1008 19:13:06.519341  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:06.519370  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:06.519417  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:06.558940  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:06.558965  584371 cri.go:89] found id: ""
	I1008 19:13:06.558979  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:06.559029  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.563471  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:06.563537  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:06.607844  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:06.607873  584371 cri.go:89] found id: ""
	I1008 19:13:06.607883  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:06.607944  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.612399  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:06.612456  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:06.645502  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:06.645521  584371 cri.go:89] found id: ""
	I1008 19:13:06.645528  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:06.645575  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.649442  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:06.649519  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:06.685085  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:06.685114  584371 cri.go:89] found id: ""
	I1008 19:13:06.685126  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:06.685183  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.689859  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:06.689935  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:06.724775  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:06.724803  584371 cri.go:89] found id: ""
	I1008 19:13:06.724814  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:06.724873  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.729489  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:06.729542  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:06.776599  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:06.776626  584371 cri.go:89] found id: ""
	I1008 19:13:06.776636  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:06.776704  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.780790  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:06.780863  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:06.817072  584371 cri.go:89] found id: ""
	I1008 19:13:06.817097  584371 logs.go:282] 0 containers: []
	W1008 19:13:06.817106  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:06.817113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:06.817171  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:06.855429  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:06.855453  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:06.855457  584371 cri.go:89] found id: ""
	I1008 19:13:06.855465  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:06.855520  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.859774  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.863800  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:06.863821  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:06.931413  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:06.931443  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:06.946213  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:06.946236  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:07.070604  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:07.070640  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:07.114749  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:07.114782  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:07.152555  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:07.152584  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:07.192730  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:07.192759  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:07.242001  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:07.242036  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:07.612662  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:07.612714  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:07.656655  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:07.656700  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:07.695462  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:07.695494  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:07.733107  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:07.733143  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:07.779348  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:07.779382  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:10.325584  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:13:10.325616  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.325620  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.325624  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.325628  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.325631  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.325634  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.325639  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.325644  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.325651  584371 system_pods.go:74] duration metric: took 3.806304739s to wait for pod list to return data ...
	I1008 19:13:10.325659  584371 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:10.328062  584371 default_sa.go:45] found service account: "default"
	I1008 19:13:10.328082  584371 default_sa.go:55] duration metric: took 2.41797ms for default service account to be created ...
	I1008 19:13:10.328089  584371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:10.332201  584371 system_pods.go:86] 8 kube-system pods found
	I1008 19:13:10.332224  584371 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.332229  584371 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.332233  584371 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.332237  584371 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.332241  584371 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.332245  584371 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.332250  584371 system_pods.go:89] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.332254  584371 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.332261  584371 system_pods.go:126] duration metric: took 4.167739ms to wait for k8s-apps to be running ...
	I1008 19:13:10.332270  584371 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:10.332313  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:10.350257  584371 system_svc.go:56] duration metric: took 17.979349ms WaitForService to wait for kubelet
	I1008 19:13:10.350288  584371 kubeadm.go:582] duration metric: took 4m21.445892386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:10.350310  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:10.352582  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:10.352598  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:10.352609  584371 node_conditions.go:105] duration metric: took 2.294326ms to run NodePressure ...
	I1008 19:13:10.352620  584371 start.go:241] waiting for startup goroutines ...
	I1008 19:13:10.352626  584371 start.go:246] waiting for cluster config update ...
	I1008 19:13:10.352636  584371 start.go:255] writing updated cluster config ...
	I1008 19:13:10.352882  584371 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:10.401998  584371 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:10.404037  584371 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
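	With kubeadm having timed out, minikube now queries CRI-O for each expected component by name; every lookup below returns an empty id and "0 containers", i.e. no control-plane container was ever created. The same check can be run by hand with the crictl invocation shown in the log (a sketch; the component list simply mirrors the log's queries):
	
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
		    echo "== $name =="
		    sudo crictl ps -a --quiet --name="$name"   # empty output means the container never started
		done
	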
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
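	The log-gathering pass above collects dmesg, the CRI-O and kubelet journals, and overall container status; note that "kubectl describe nodes" fails because nothing is listening on localhost:8443. The same journals can be pulled manually with the commands minikube ran, repeated here for convenience:
	
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	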
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 
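	This start attempt is classified as K8S_KUBELET_NOT_RUNNING: the kubelet never answered on 127.0.0.1:10248, so kubeadm's wait-control-plane phase gave up waiting for the static control-plane pods. The log's own suggestion is to inspect the kubelet journal and retry with the cgroup driver pinned to systemd. A hedged sketch of that retry for this KVM + cri-o suite (only --extra-config=kubelet.cgroup-driver=systemd comes from the log; the driver/runtime flags and the <profile> placeholder are illustrative):
	
		systemctl status kubelet
		journalctl -xeu kubelet
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		    --extra-config=kubelet.cgroup-driver=systemd
	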
	
	
	==> CRI-O <==
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.421348944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09743cbe-6137-4960-9382-81e36289d94a name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.422544178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a5fffc4-54c9-4fa0-980d-b22c13ca5705 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.422929993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415323422907535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a5fffc4-54c9-4fa0-980d-b22c13ca5705 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.423617967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71817d35-7d30-4695-be54-2b5ea07c093b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.423736089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71817d35-7d30-4695-be54-2b5ea07c093b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.423931698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71817d35-7d30-4695-be54-2b5ea07c093b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.464857864Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7fd462c0-2217-452d-9b14-a53814946c22 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.465331519Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x4j67,Uid:89141081-eb1e-466a-913d-597e8df02125,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414771508429932,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:12:49.693916172Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrz7s,Uid:e441884e-7c57-4a73-86bb-c46629d2eda6,Namesp
ace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414771507555160,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:12:49.700827284Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&PodSandboxMetadata{Name:kube-proxy-wd5kv,Uid:714118a5-ec5d-448c-ad63-7f0303d00eb0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414771318252519,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,k8s-app: kube-proxy,pod-template-generation: 1,},Ann
otations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:12:49.507422728Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c3c57b3f-59d9-49bb-ba82-caee6af45bde,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414771256533413,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"con
tainers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-08T19:12:50.348200085Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c3b5e39326c59b94d3d42e9d38b8b36fa1fdb22c4e5a9c5a8de8bb88130829b,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-wvh5g,Uid:99dacec0-80f9-4662-bbea-6191aa9b62d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414771102939007,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-wvh5g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99dacec0-80f9-4662-bbea-6191aa9b62d3,k8s-app: metrics-server,
pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-08T19:12:50.496494910Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-142496,Uid:e21240ab4672b709011cc56e9d7153a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414759183353283,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e21240ab4672b709011cc56e9d7153a1,kubernetes.io/config.seen: 2024-10-08T19:12:38.728488585Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&PodSandb
oxMetadata{Name:kube-controller-manager-default-k8s-diff-port-142496,Uid:463572b0fbfb93adebd54796294d940c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414759169530448,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 463572b0fbfb93adebd54796294d940c,kubernetes.io/config.seen: 2024-10-08T19:12:38.728487579Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-142496,Uid:6a1a73ab945a8a2e63f2d0e0a2a3fa9d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728414759160478195,Labels:map[string]string{component: etcd,io.kubernetes.container.name: PO
D,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.213:2379,kubernetes.io/config.hash: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,kubernetes.io/config.seen: 2024-10-08T19:12:38.728481548Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-142496,Uid:e23aa41c5b7e4060e257e9fbf18f818b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728414759156742058,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,tier: control-plane,},Annotations:ma
p[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8444,kubernetes.io/config.hash: e23aa41c5b7e4060e257e9fbf18f818b,kubernetes.io/config.seen: 2024-10-08T19:12:38.728486381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-142496,Uid:e23aa41c5b7e4060e257e9fbf18f818b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728414469103330714,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8444,kubernetes.io/config.hash: e23aa41c5b7e4060e257e9fbf18f818b,kubernetes.io/config.s
een: 2024-10-08T19:07:48.633972953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7fd462c0-2217-452d-9b14-a53814946c22 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.465963388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adb7fff1-175c-496c-a15c-1226f9cec4ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.466015547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adb7fff1-175c-496c-a15c-1226f9cec4ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.466321004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adb7fff1-175c-496c-a15c-1226f9cec4ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.466610174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6a49004-5b93-4d26-b685-9d8de08ee225 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.466794557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6a49004-5b93-4d26-b685-9d8de08ee225 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.469371112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24250376-8351-42df-86a5-cc4c1861b5a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.469734100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415323469716093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24250376-8351-42df-86a5-cc4c1861b5a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.470671661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d995c90-fa0c-4109-8481-7855b5c2b30e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.471182261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d995c90-fa0c-4109-8481-7855b5c2b30e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.471944104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d995c90-fa0c-4109-8481-7855b5c2b30e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.505445603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09045e81-ce99-4d1d-b705-eb86519a2312 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.505515430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09045e81-ce99-4d1d-b705-eb86519a2312 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.507413831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0793c2fe-0b2f-4bc2-ab8e-2a4424436c6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.507788273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415323507769681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0793c2fe-0b2f-4bc2-ab8e-2a4424436c6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.508439952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d8ca2a8-b395-4f4d-8b21-f7d2cefc8c66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.508489860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d8ca2a8-b395-4f4d-8b21-f7d2cefc8c66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:03 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:22:03.508687178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d8ca2a8-b395-4f4d-8b21-f7d2cefc8c66 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4aade92288be       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   aef121dc11aa0       coredns-7c65d6cfc9-x4j67
	f4ffba8b3e548       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8a22a6828aca8       coredns-7c65d6cfc9-wrz7s
	316c2be1cd9b8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   82ad50a49857f       kube-proxy-wd5kv
	1affa0d5c85f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   929cdea5e6572       storage-provisioner
	11ee4a9677fea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   7f7fcc2d3657a       etcd-default-k8s-diff-port-142496
	e28f698409b14       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   44750ffcb7f99       kube-scheduler-default-k8s-diff-port-142496
	143017fc423ec       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   8b411e11c8bca       kube-controller-manager-default-k8s-diff-port-142496
	04efd41bf2d49       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   6c3f46d4c4069       kube-apiserver-default-k8s-diff-port-142496
	ab91519f523bd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   22b1938c28189       kube-apiserver-default-k8s-diff-port-142496
	
	
	==> coredns [a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-142496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-142496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=default-k8s-diff-port-142496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 19:12:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-142496
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 19:21:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 19:18:00 +0000   Tue, 08 Oct 2024 19:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 19:18:00 +0000   Tue, 08 Oct 2024 19:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 19:18:00 +0000   Tue, 08 Oct 2024 19:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 19:18:00 +0000   Tue, 08 Oct 2024 19:12:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.213
	  Hostname:    default-k8s-diff-port-142496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e918f3f98174aa5aaa05fc0956fcda2
	  System UUID:                8e918f3f-9817-4aa5-aaa0-5fc0956fcda2
	  Boot ID:                    5e0b3d23-4e67-45eb-89f9-edcb3778f372
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wrz7s                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-x4j67                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-default-k8s-diff-port-142496                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-142496             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-142496    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-wd5kv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-142496             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-wvh5g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node default-k8s-diff-port-142496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node default-k8s-diff-port-142496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node default-k8s-diff-port-142496 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s  node-controller  Node default-k8s-diff-port-142496 event: Registered Node default-k8s-diff-port-142496 in Controller
	
	
	==> dmesg <==
	[  +0.052034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045144] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.968150] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471282] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.569606] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.751066] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.056308] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082075] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.203244] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.168564] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.296220] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +4.177346] systemd-fstab-generator[784]: Ignoring "noauto" option for root device
	[  +2.033266] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.062972] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.557632] kauditd_printk_skb: 69 callbacks suppressed
	[Oct 8 19:08] kauditd_printk_skb: 87 callbacks suppressed
	[Oct 8 19:12] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.305003] systemd-fstab-generator[2556]: Ignoring "noauto" option for root device
	[  +4.709444] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.352522] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +5.400838] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[  +0.115383] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.558924] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1] <==
	{"level":"info","ts":"2024-10-08T19:12:39.857763Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-08T19:12:39.845430Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.213:2380"}
	{"level":"info","ts":"2024-10-08T19:12:39.857869Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.213:2380"}
	{"level":"info","ts":"2024-10-08T19:12:39.857338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 switched to configuration voters=(12669501187770177636)"}
	{"level":"info","ts":"2024-10-08T19:12:39.858011Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","added-peer-id":"afd31c34526e5864","added-peer-peer-urls":["https://192.168.50.213:2380"]}
	{"level":"info","ts":"2024-10-08T19:12:40.394148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-08T19:12:40.394200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-08T19:12:40.394228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgPreVoteResp from afd31c34526e5864 at term 1"}
	{"level":"info","ts":"2024-10-08T19:12:40.394247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became candidate at term 2"}
	{"level":"info","ts":"2024-10-08T19:12:40.394269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgVoteResp from afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-10-08T19:12:40.394280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became leader at term 2"}
	{"level":"info","ts":"2024-10-08T19:12:40.394288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: afd31c34526e5864 elected leader afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-10-08T19:12:40.398305Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"afd31c34526e5864","local-member-attributes":"{Name:default-k8s-diff-port-142496 ClientURLs:[https://192.168.50.213:2379]}","request-path":"/0/members/afd31c34526e5864/attributes","cluster-id":"64fdbb8e23141dc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T19:12:40.398395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T19:12:40.398837Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:12:40.400086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T19:12:40.400108Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T19:12:40.403415Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T19:12:40.400698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:12:40.405126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:12:40.405233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:12:40.405278Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:12:40.405462Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.213:2379"}
	{"level":"info","ts":"2024-10-08T19:12:40.405805Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:12:40.406590Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:22:03 up 14 min,  0 users,  load average: 0.02, 0.17, 0.17
	Linux default-k8s-diff-port-142496 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00] <==
	E1008 19:17:42.785623       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1008 19:17:42.785693       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:17:42.786760       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:17:42.786863       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:18:42.787662       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:18:42.787739       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:18:42.787871       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:18:42.787927       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:18:42.788874       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:18:42.790033       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:20:42.789032       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:20:42.789417       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:20:42.790371       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:20:42.790473       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:20:42.790603       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:20:42.791545       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370] <==
	W1008 19:12:31.127431       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:31.194449       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:31.230299       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:31.388938       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:34.931301       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:34.982564       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.227301       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.592387       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.905015       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.932721       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.988741       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.020618       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.033261       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.074618       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.160589       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.312726       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.380883       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.431036       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.432408       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.436841       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.482118       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.529358       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.529466       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.594822       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.627438       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711] <==
	E1008 19:16:48.807324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:16:49.249437       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:17:18.814747       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:17:19.262046       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:17:48.820738       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:17:49.269921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:18:00.196349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-142496"
	E1008 19:18:18.826889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:18:19.277961       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:18:43.407211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="354.092µs"
	E1008 19:18:48.835102       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:18:49.288031       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:18:57.404133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="222.126µs"
	E1008 19:19:18.841656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:19:19.300005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:19:48.848917       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:19:49.306882       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:20:18.855217       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:20:19.315119       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:20:48.861840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:20:49.323392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:21:18.868282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:21:19.346577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:21:48.874109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:21:49.355390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 19:12:51.851174       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 19:12:51.890000       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.213"]
	E1008 19:12:51.890323       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 19:12:52.017439       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 19:12:52.017495       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 19:12:52.017517       1 server_linux.go:169] "Using iptables Proxier"
	I1008 19:12:52.039117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 19:12:52.039388       1 server.go:483] "Version info" version="v1.31.1"
	I1008 19:12:52.039418       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:12:52.049874       1 config.go:199] "Starting service config controller"
	I1008 19:12:52.049915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 19:12:52.049938       1 config.go:105] "Starting endpoint slice config controller"
	I1008 19:12:52.049942       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 19:12:52.052179       1 config.go:328] "Starting node config controller"
	I1008 19:12:52.052206       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 19:12:52.150915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 19:12:52.150975       1 shared_informer.go:320] Caches are synced for service config
	I1008 19:12:52.152328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414] <==
	W1008 19:12:42.651601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 19:12:42.651771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.661163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1008 19:12:42.661241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.688906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 19:12:42.688937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.714485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 19:12:42.714533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.845514       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 19:12:42.845570       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1008 19:12:42.873975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 19:12:42.874042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.880483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 19:12:42.880540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.910132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 19:12:42.910182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.012736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 19:12:43.012788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.022574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 19:12:43.022619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.074929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 19:12:43.074984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.133237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 19:12:43.133290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1008 19:12:46.015128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 19:20:53 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:20:53.390141    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:20:54 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:20:54.572708    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415254572328326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:20:54 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:20:54.572747    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415254572328326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:04 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:04.574467    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415264574144123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:04 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:04.574505    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415264574144123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:05 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:05.390195    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:21:14 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:14.575503    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415274575292739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:14 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:14.575525    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415274575292739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:20 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:20.389510    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:21:24 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:24.577817    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415284577258752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:24 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:24.577858    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415284577258752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:34 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:34.392157    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:21:34 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:34.579857    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415294579563964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:34 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:34.579905    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415294579563964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:44.415030    2881 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:44.582619    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415304581842889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:44.582679    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415304581842889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:46 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:46.390979    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:21:54 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:54.584437    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415314584221338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:54 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:54.584479    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415314584221338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:58 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:21:58.390571    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	
	
	==> storage-provisioner [1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938] <==
	I1008 19:12:51.528840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 19:12:51.566358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 19:12:51.566436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 19:12:51.585541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 19:12:51.585680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142496_03500725-b624-4c94-9168-9eb5a541bcc4!
	I1008 19:12:51.594144       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1108c641-0a60-4a0b-a727-c64300ada9de", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-142496_03500725-b624-4c94-9168-9eb5a541bcc4 became leader
	I1008 19:12:51.686513       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142496_03500725-b624-4c94-9168-9eb5a541bcc4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-wvh5g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 describe pod metrics-server-6867b74b74-wvh5g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-142496 describe pod metrics-server-6867b74b74-wvh5g: exit status 1 (82.877543ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-wvh5g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-142496 describe pod metrics-server-6867b74b74-wvh5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.24s)
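The non-running metrics-server pod flagged in the post-mortem appears to be the intentional registry override: the kubelet log above backs off on fake.domain/registry.k8s.io/echoserver:1.4, matching the earlier "addons enable metrics-server ... --registries=MetricsServer=fake.domain" step recorded in the Audit table further down. A minimal client-go sketch (not part of the minikube test suite) of how that waiting reason could be read programmatically; the KUBECONFIG-based config loading and the k8s-app=metrics-server label selector are assumptions about the addon manifest:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the kubeconfig minikube wrote for this
	// profile and its current context is the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumption: the metrics-server addon pods carry k8s-app=metrics-server.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				// Prints the waiting reason, e.g. ImagePullBackOff, plus the
				// kubelet's back-off message seen in the log above.
				fmt.Printf("%s: %s: %s\n", p.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
			}
		}
	}
}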

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1008 19:15:51.764157  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966632 -n no-preload-966632
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-08 19:22:10.931428587 +0000 UTC m=+6524.420580770
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
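For reference, the check this test performs (start_stop_delete_test.go:274) is a label-selector wait: poll the kubernetes-dashboard namespace until a pod matching k8s-app=kubernetes-dashboard reports Running, within a 9m budget. A rough client-go sketch of that wait, assuming an already-constructed clientset; the function name, polling interval, and error handling are illustrative, not the actual minikube helper:

package waitcheck

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboard polls until a pod labelled k8s-app=kubernetes-dashboard in
// the kubernetes-dashboard namespace reaches phase Running, or the 9m budget
// used by the test expires.
func waitForDashboard(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				return false, nil // keep polling on transient list errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}
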
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-966632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-966632 logs -n 25: (1.992383272s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-038693 sudo                            | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                 | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:04:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:20.270613  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:23.342576  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:04:29.422638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:32.494606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:38.574600  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:41.646592  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:47.726606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:50.798595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:56.878669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:59.950607  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:06.030583  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:09.102584  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:15.182571  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:18.254590  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:24.334638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:27.406606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:33.486619  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:36.558552  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:42.638565  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:45.710610  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:51.790561  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:54.862591  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:00.942606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:04.014669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:10.094618  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:13.166598  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:19.246573  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:22.318595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:28.398732  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:31.470685  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:37.550574  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:40.622614  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:46.702620  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:49.774581  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:55.854627  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:58.926568  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:07:01.929445  585014 start.go:364] duration metric: took 3m15.782086174s to acquireMachinesLock for "embed-certs-783146"
	I1008 19:07:01.929517  585014 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:01.929523  585014 fix.go:54] fixHost starting: 
	I1008 19:07:01.929889  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:01.929945  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:01.945409  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 19:07:01.945858  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:01.946357  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:01.946387  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:01.946744  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:01.946895  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:01.947028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:01.948399  585014 fix.go:112] recreateIfNeeded on embed-certs-783146: state=Stopped err=<nil>
	I1008 19:07:01.948419  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	W1008 19:07:01.948545  585014 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:01.954020  585014 out.go:177] * Restarting existing kvm2 VM for "embed-certs-783146" ...
	I1008 19:07:01.926825  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:01.926871  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927219  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:07:01.927270  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927475  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:07:01.929278  584371 machine.go:96] duration metric: took 4m37.425232924s to provisionDockerMachine
	I1008 19:07:01.929341  584371 fix.go:56] duration metric: took 4m37.445578307s for fixHost
	I1008 19:07:01.929349  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 4m37.445609603s
	W1008 19:07:01.929369  584371 start.go:714] error starting host: provision: host is not running
	W1008 19:07:01.929510  584371 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1008 19:07:01.929524  584371 start.go:729] Will try again in 5 seconds ...
	I1008 19:07:01.955309  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Start
	I1008 19:07:01.955452  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 19:07:01.956122  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 19:07:01.956432  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 19:07:01.956743  585014 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 19:07:01.957427  585014 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 19:07:03.159229  585014 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 19:07:03.160116  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.160503  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.160565  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.160497  585935 retry.go:31] will retry after 282.873854ms: waiting for machine to come up
	I1008 19:07:03.445297  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.445810  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.445838  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.445740  585935 retry.go:31] will retry after 344.936527ms: waiting for machine to come up
	I1008 19:07:03.792413  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.792802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.792837  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.792741  585935 retry.go:31] will retry after 414.968289ms: waiting for machine to come up
	I1008 19:07:04.209200  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.209532  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.209555  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.209502  585935 retry.go:31] will retry after 403.180416ms: waiting for machine to come up
	I1008 19:07:04.614156  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.614679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.614713  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.614636  585935 retry.go:31] will retry after 631.841511ms: waiting for machine to come up
	I1008 19:07:05.248574  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.248983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.249015  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.248917  585935 retry.go:31] will retry after 639.776909ms: waiting for machine to come up
	I1008 19:07:05.890868  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.891332  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.891406  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.891329  585935 retry.go:31] will retry after 764.489176ms: waiting for machine to come up
	I1008 19:07:06.931497  584371 start.go:360] acquireMachinesLock for no-preload-966632: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:06.657130  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:06.657520  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:06.657550  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:06.657462  585935 retry.go:31] will retry after 1.348973281s: waiting for machine to come up
	I1008 19:07:08.008293  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:08.008779  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:08.008805  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:08.008740  585935 retry.go:31] will retry after 1.146283289s: waiting for machine to come up
	I1008 19:07:09.157106  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:09.157517  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:09.157546  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:09.157493  585935 retry.go:31] will retry after 1.510430686s: waiting for machine to come up
	I1008 19:07:10.669393  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:10.669802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:10.669831  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:10.669749  585935 retry.go:31] will retry after 2.380864418s: waiting for machine to come up
	I1008 19:07:13.053078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:13.053487  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:13.053512  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:13.053427  585935 retry.go:31] will retry after 2.553865951s: waiting for machine to come up
	I1008 19:07:15.610098  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:15.610501  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:15.610535  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:15.610428  585935 retry.go:31] will retry after 4.018444789s: waiting for machine to come up
	I1008 19:07:20.967039  585096 start.go:364] duration metric: took 3m30.476693248s to acquireMachinesLock for "default-k8s-diff-port-142496"
	I1008 19:07:20.967105  585096 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:20.967115  585096 fix.go:54] fixHost starting: 
	I1008 19:07:20.967619  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:20.967675  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:20.984936  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1008 19:07:20.985358  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:20.985869  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:07:20.985896  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:20.986199  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:20.986380  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:20.986520  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:07:20.987828  585096 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142496: state=Stopped err=<nil>
	I1008 19:07:20.987867  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	W1008 19:07:20.988020  585096 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:20.990029  585096 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142496" ...
	I1008 19:07:19.632076  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632468  585014 main.go:141] libmachine: (embed-certs-783146) Found IP for machine: 192.168.72.183
	I1008 19:07:19.632504  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has current primary IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632511  585014 main.go:141] libmachine: (embed-certs-783146) Reserving static IP address...
	I1008 19:07:19.632968  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.633020  585014 main.go:141] libmachine: (embed-certs-783146) DBG | skip adding static IP to network mk-embed-certs-783146 - found existing host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"}
	I1008 19:07:19.633041  585014 main.go:141] libmachine: (embed-certs-783146) Reserved static IP address: 192.168.72.183
	I1008 19:07:19.633062  585014 main.go:141] libmachine: (embed-certs-783146) Waiting for SSH to be available...
	I1008 19:07:19.633073  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Getting to WaitForSSH function...
	I1008 19:07:19.634939  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635221  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.635249  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635415  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH client type: external
	I1008 19:07:19.635453  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa (-rw-------)
	I1008 19:07:19.635496  585014 main.go:141] libmachine: (embed-certs-783146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:19.635509  585014 main.go:141] libmachine: (embed-certs-783146) DBG | About to run SSH command:
	I1008 19:07:19.635522  585014 main.go:141] libmachine: (embed-certs-783146) DBG | exit 0
	I1008 19:07:19.758276  585014 main.go:141] libmachine: (embed-certs-783146) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:19.758658  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 19:07:19.759310  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:19.761990  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.762456  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762803  585014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:07:19.763012  585014 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:19.763034  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:19.763271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.765523  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765829  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.765858  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765988  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.766159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766289  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766433  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.766589  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.766877  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.766891  585014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:19.866272  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:19.866297  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866563  585014 buildroot.go:166] provisioning hostname "embed-certs-783146"
	I1008 19:07:19.866585  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866799  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.869295  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869648  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.869679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869836  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.870017  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870153  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870293  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.870444  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.870621  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.870636  585014 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-783146 && echo "embed-certs-783146" | sudo tee /etc/hostname
	I1008 19:07:19.983892  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-783146
	
	I1008 19:07:19.983925  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.986430  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986776  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.986806  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986922  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.987104  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.987588  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.987746  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.987762  585014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-783146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-783146/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-783146' | sudo tee -a /etc/hosts; 
				fi
			fi
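	Note: the two SSH commands above are the provisioner setting the guest hostname and making /etc/hosts resolve it. A minimal equivalent sketch of the same steps, using the profile name from this log purely as a placeholder:
	    NAME=embed-certs-783146
	    # set the hostname and persist it, as in the logged command
	    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	    # ensure /etc/hosts has an entry for it, rewriting the 127.0.1.1 line if one already exists
	    if ! grep -xq ".*\s$NAME" /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
	      else
	        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	      fi
	    fi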
	I1008 19:07:20.095178  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:20.095212  585014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:20.095264  585014 buildroot.go:174] setting up certificates
	I1008 19:07:20.095276  585014 provision.go:84] configureAuth start
	I1008 19:07:20.095288  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:20.095578  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.098000  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098431  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.098459  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098591  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.100935  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101241  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.101271  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101393  585014 provision.go:143] copyHostCerts
	I1008 19:07:20.101452  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:20.101463  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:20.101544  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:20.101807  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:20.101824  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:20.101873  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:20.102015  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:20.102029  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:20.102075  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:20.102152  585014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-783146 san=[127.0.0.1 192.168.72.183 embed-certs-783146 localhost minikube]
	I1008 19:07:20.378020  585014 provision.go:177] copyRemoteCerts
	I1008 19:07:20.378093  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:20.378133  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.380678  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381017  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.381050  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381175  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.381386  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.381579  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.381717  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.464627  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:20.487853  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:07:20.510174  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:07:20.532381  585014 provision.go:87] duration metric: took 437.094502ms to configureAuth
	I1008 19:07:20.532405  585014 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:20.532571  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:20.532669  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.535064  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.535382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.535753  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.535920  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.536039  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.536193  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.536406  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.536429  585014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:20.745937  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:20.745967  585014 machine.go:96] duration metric: took 982.940955ms to provisionDockerMachine
	I1008 19:07:20.745980  585014 start.go:293] postStartSetup for "embed-certs-783146" (driver="kvm2")
	I1008 19:07:20.745994  585014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:20.746012  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.746380  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:20.746417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.749056  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749395  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.749425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749566  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.749738  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.749852  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.749943  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.828580  585014 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:20.832894  585014 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:20.832923  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:20.832994  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:20.833069  585014 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:20.833162  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:20.842230  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:20.864957  585014 start.go:296] duration metric: took 118.964089ms for postStartSetup
	I1008 19:07:20.865006  585014 fix.go:56] duration metric: took 18.93548189s for fixHost
	I1008 19:07:20.865029  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.867709  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868089  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.868113  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868223  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.868425  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868583  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868742  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.868926  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.869159  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.869175  585014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:20.966898  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414440.940275348
	
	I1008 19:07:20.966919  585014 fix.go:216] guest clock: 1728414440.940275348
	I1008 19:07:20.966926  585014 fix.go:229] Guest: 2024-10-08 19:07:20.940275348 +0000 UTC Remote: 2024-10-08 19:07:20.865011917 +0000 UTC m=+214.857488447 (delta=75.263431ms)
	I1008 19:07:20.966948  585014 fix.go:200] guest clock delta is within tolerance: 75.263431ms
	I1008 19:07:20.966953  585014 start.go:83] releasing machines lock for "embed-certs-783146", held for 19.037463535s
	I1008 19:07:20.966979  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.967246  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.969983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.970386  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970586  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971061  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971243  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971340  585014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:20.971382  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.971487  585014 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:20.971515  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.974211  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974581  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974632  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.974695  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974872  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.974999  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.975024  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.975028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975184  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975228  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.975374  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975501  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.975559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975709  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:21.072152  585014 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:21.078116  585014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:21.221176  585014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:21.227359  585014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:21.227434  585014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:21.242691  585014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:21.242716  585014 start.go:495] detecting cgroup driver to use...
	I1008 19:07:21.242796  585014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:21.257429  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:21.270208  585014 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:21.270258  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:21.282826  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:21.295827  585014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:21.405804  585014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:21.572147  585014 docker.go:233] disabling docker service ...
	I1008 19:07:21.572231  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:21.586083  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:21.598657  585014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:21.722224  585014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:21.853317  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:21.867234  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:21.884872  585014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:21.884949  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.895154  585014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:21.895223  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.905371  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.915602  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.926026  585014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:21.938089  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.949261  585014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.966211  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.978120  585014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:21.987631  585014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:21.987693  585014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:22.002185  585014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:22.013111  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:22.135933  585014 ssh_runner.go:195] Run: sudo systemctl restart crio
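	Note: the sequence above is minikube rewriting /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O; consolidated into an illustrative sketch (values exactly as logged):
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # pin the pause image and switch the cgroup manager, as in the logged sed runs
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # allow unprivileged low ports inside pods
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio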
	I1008 19:07:22.230256  585014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:22.230342  585014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:22.235005  585014 start.go:563] Will wait 60s for crictl version
	I1008 19:07:22.235076  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:07:22.238991  585014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:22.279302  585014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:22.279391  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.308343  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.337272  585014 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:20.991759  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Start
	I1008 19:07:20.991997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring networks are active...
	I1008 19:07:20.992703  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network default is active
	I1008 19:07:20.993057  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network mk-default-k8s-diff-port-142496 is active
	I1008 19:07:20.993435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Getting domain xml...
	I1008 19:07:20.994209  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Creating domain...
	I1008 19:07:22.240185  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting to get IP...
	I1008 19:07:22.240949  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241417  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241469  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.241382  586083 retry.go:31] will retry after 234.248435ms: waiting for machine to come up
	I1008 19:07:22.476800  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477343  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477375  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.477275  586083 retry.go:31] will retry after 323.851452ms: waiting for machine to come up
	I1008 19:07:22.802997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803574  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803610  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.803516  586083 retry.go:31] will retry after 445.299956ms: waiting for machine to come up
	I1008 19:07:23.250211  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250686  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250715  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.250651  586083 retry.go:31] will retry after 574.786836ms: waiting for machine to come up
	I1008 19:07:23.827535  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828010  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.827959  586083 retry.go:31] will retry after 563.165045ms: waiting for machine to come up
	I1008 19:07:24.393150  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393741  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393792  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.393717  586083 retry.go:31] will retry after 576.443855ms: waiting for machine to come up
	I1008 19:07:24.971698  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972132  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972161  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.972090  586083 retry.go:31] will retry after 999.17904ms: waiting for machine to come up
	I1008 19:07:22.338812  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:22.341998  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:22.342417  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342680  585014 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:22.346863  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:22.359456  585014 kubeadm.go:883] updating cluster {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:22.359630  585014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:22.359692  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:22.394832  585014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:22.394893  585014 ssh_runner.go:195] Run: which lz4
	I1008 19:07:22.398935  585014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:22.403100  585014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:22.403127  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:23.771685  585014 crio.go:462] duration metric: took 1.372780034s to copy over tarball
	I1008 19:07:23.771769  585014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:25.816508  585014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044704362s)
	I1008 19:07:25.816547  585014 crio.go:469] duration metric: took 2.04482777s to extract the tarball
	I1008 19:07:25.816557  585014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:25.852980  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:25.893366  585014 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:25.893391  585014 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:25.893399  585014 kubeadm.go:934] updating node { 192.168.72.183 8443 v1.31.1 crio true true} ...
	I1008 19:07:25.893517  585014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-783146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:25.893579  585014 ssh_runner.go:195] Run: crio config
	I1008 19:07:25.934828  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:25.934850  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:25.934874  585014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:25.934906  585014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.183 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-783146 NodeName:embed-certs-783146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:25.935039  585014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-783146"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:25.935106  585014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:25.944851  585014 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:25.944919  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:25.954022  585014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1008 19:07:25.979675  585014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:26.001147  585014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1008 19:07:26.017613  585014 ssh_runner.go:195] Run: grep 192.168.72.183	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:26.021401  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:26.033347  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:25.972405  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972868  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972891  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:25.972831  586083 retry.go:31] will retry after 1.186801161s: waiting for machine to come up
	I1008 19:07:27.161319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161877  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161900  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:27.161823  586083 retry.go:31] will retry after 1.448383195s: waiting for machine to come up
	I1008 19:07:28.611319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611697  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:28.611613  586083 retry.go:31] will retry after 1.738948191s: waiting for machine to come up
	I1008 19:07:30.352081  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352582  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352617  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:30.352530  586083 retry.go:31] will retry after 2.624799898s: waiting for machine to come up
	I1008 19:07:26.138298  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:26.154419  585014 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146 for IP: 192.168.72.183
	I1008 19:07:26.154447  585014 certs.go:194] generating shared ca certs ...
	I1008 19:07:26.154470  585014 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:26.154651  585014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:26.154714  585014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:26.154729  585014 certs.go:256] generating profile certs ...
	I1008 19:07:26.154860  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/client.key
	I1008 19:07:26.154948  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key.b07aac04
	I1008 19:07:26.155003  585014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key
	I1008 19:07:26.155159  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:26.155202  585014 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:26.155212  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:26.155232  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:26.155256  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:26.155280  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:26.155319  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:26.156076  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:26.187225  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:26.235804  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:26.268034  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:26.292729  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 19:07:26.320118  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:26.351058  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:26.374004  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:07:26.396526  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:26.419067  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:26.441449  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:26.463768  585014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:26.479471  585014 ssh_runner.go:195] Run: openssl version
	I1008 19:07:26.484957  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:26.495286  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501166  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501225  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.507154  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:26.517587  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:26.528157  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532896  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532967  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.540724  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:26.554952  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:26.567160  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571304  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571394  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.576974  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:26.587198  585014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:26.591621  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:26.597176  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:26.602766  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:26.608373  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:26.613797  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:26.619310  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:26.624702  585014 kubeadm.go:392] StartCluster: {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:26.624831  585014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:26.624878  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.666183  585014 cri.go:89] found id: ""
	I1008 19:07:26.666253  585014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:26.676621  585014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:26.676644  585014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:26.676699  585014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:26.686549  585014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:26.687532  585014 kubeconfig.go:125] found "embed-certs-783146" server: "https://192.168.72.183:8443"
	I1008 19:07:26.689545  585014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:26.698758  585014 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.183
	I1008 19:07:26.698790  585014 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:26.698804  585014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:26.698856  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.738148  585014 cri.go:89] found id: ""
	I1008 19:07:26.738209  585014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:26.753980  585014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:26.763186  585014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:26.763208  585014 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:26.763257  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:07:26.771789  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:26.771847  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:26.780812  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:07:26.789329  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:26.789390  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:26.798230  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.806781  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:26.806842  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.815549  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:07:26.823782  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:26.823830  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:26.832698  585014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:26.841687  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:26.945569  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.159232  585014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213619978s)
	I1008 19:07:28.159280  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.372727  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.456082  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.567486  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:28.567627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.067909  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.568466  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.068627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.567821  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.604366  585014 api_server.go:72] duration metric: took 2.036885191s to wait for apiserver process to appear ...
	I1008 19:07:30.604403  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:30.604440  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.461223  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.461270  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.461286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.499425  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.499473  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.604563  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.614594  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:33.614625  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.105286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.111706  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:34.111747  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.605326  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.612912  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:07:34.619204  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:34.619227  585014 api_server.go:131] duration metric: took 4.014816798s to wait for apiserver health ...
	I1008 19:07:34.619236  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:34.619242  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:34.621043  585014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:32.980593  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981141  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981171  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:32.981076  586083 retry.go:31] will retry after 3.401015855s: waiting for machine to come up
	I1008 19:07:34.622500  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:34.632627  585014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:34.654975  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:34.667824  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:34.667853  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:34.667863  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:34.667874  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:34.667879  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:34.667884  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:34.667890  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:34.667899  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:34.667904  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:34.667910  585014 system_pods.go:74] duration metric: took 12.913884ms to wait for pod list to return data ...
	I1008 19:07:34.667919  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:34.672996  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:34.673018  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:34.673029  585014 node_conditions.go:105] duration metric: took 5.105827ms to run NodePressure ...
	I1008 19:07:34.673045  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:34.992309  585014 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996835  585014 kubeadm.go:739] kubelet initialised
	I1008 19:07:34.996861  585014 kubeadm.go:740] duration metric: took 4.524726ms waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996870  585014 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:35.005255  585014 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.012539  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012568  585014 pod_ready.go:82] duration metric: took 7.278613ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.012580  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012589  585014 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.018465  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018489  585014 pod_ready.go:82] duration metric: took 5.8848ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.018500  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018509  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.026503  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026533  585014 pod_ready.go:82] duration metric: took 8.012156ms for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.026544  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026555  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.058419  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058449  585014 pod_ready.go:82] duration metric: took 31.879605ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.058463  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058471  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.458244  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458275  585014 pod_ready.go:82] duration metric: took 399.794285ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.458286  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458292  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.858567  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858612  585014 pod_ready.go:82] duration metric: took 400.312425ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.858625  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858637  585014 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:36.258490  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258520  585014 pod_ready.go:82] duration metric: took 399.870797ms for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:36.258530  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258538  585014 pod_ready.go:39] duration metric: took 1.261659261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:36.258558  585014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:07:36.269993  585014 ops.go:34] apiserver oom_adj: -16
	I1008 19:07:36.270016  585014 kubeadm.go:597] duration metric: took 9.593365367s to restartPrimaryControlPlane
	I1008 19:07:36.270025  585014 kubeadm.go:394] duration metric: took 9.645330227s to StartCluster
	I1008 19:07:36.270044  585014 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.270125  585014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:07:36.271682  585014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.271945  585014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:07:36.272024  585014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:07:36.272130  585014 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-783146"
	I1008 19:07:36.272158  585014 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-783146"
	W1008 19:07:36.272166  585014 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:07:36.272152  585014 addons.go:69] Setting default-storageclass=true in profile "embed-certs-783146"
	I1008 19:07:36.272179  585014 addons.go:69] Setting metrics-server=true in profile "embed-certs-783146"
	I1008 19:07:36.272198  585014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-783146"
	I1008 19:07:36.272203  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272213  585014 addons.go:234] Setting addon metrics-server=true in "embed-certs-783146"
	W1008 19:07:36.272224  585014 addons.go:243] addon metrics-server should already be in state true
	I1008 19:07:36.272256  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272187  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:36.272616  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272638  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272658  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272689  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272694  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272738  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.274263  585014 out.go:177] * Verifying Kubernetes components...
	I1008 19:07:36.275444  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:36.288219  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1008 19:07:36.288686  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.289297  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.289328  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.289721  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.290415  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.290462  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.293043  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1008 19:07:36.293374  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I1008 19:07:36.293461  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293721  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293954  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.293978  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294188  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.294212  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294299  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294504  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.294534  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294982  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.295028  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.297638  585014 addons.go:234] Setting addon default-storageclass=true in "embed-certs-783146"
	W1008 19:07:36.297661  585014 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:07:36.297692  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.298042  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.298081  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.309286  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1008 19:07:36.309776  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310024  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1008 19:07:36.310337  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310360  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.310478  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310771  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.310980  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310997  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.311013  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.311330  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.311500  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.313004  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1008 19:07:36.313159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313368  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.313523  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313926  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.313951  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.314284  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.314777  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.314820  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.314992  585014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:07:36.315010  585014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:07:36.316168  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:07:36.316191  585014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:07:36.316212  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.316309  585014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.316333  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:07:36.316352  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.320088  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320418  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320566  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320591  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320733  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.320888  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320912  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320931  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321074  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.321181  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321235  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321400  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321397  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.321532  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.331532  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1008 19:07:36.331881  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.332309  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.332331  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.332724  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.332929  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.334589  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.334775  585014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.334797  585014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:07:36.334811  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.337675  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.338093  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338209  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.338380  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.338491  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.338600  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.444532  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:36.462719  585014 node_ready.go:35] waiting up to 6m0s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:36.519485  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.613714  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:07:36.613738  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:07:36.637773  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.645883  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:07:36.645907  585014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:07:36.685924  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.685952  585014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:07:36.710461  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.970231  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970256  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970563  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970589  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970599  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970606  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970860  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970881  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970892  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980520  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.980538  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.980826  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980869  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.980888  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.676577  585014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038767196s)
	I1008 19:07:37.676633  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.676646  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.676972  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.676982  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677040  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677058  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.677075  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.677333  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677351  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677375  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689600  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689615  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.689883  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.689897  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689901  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.689917  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689934  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.690210  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.690227  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.690240  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.690256  585014 addons.go:475] Verifying addon metrics-server=true in "embed-certs-783146"
	I1008 19:07:37.692035  585014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1008 19:07:36.383659  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.383993  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.384026  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:36.383939  586083 retry.go:31] will retry after 3.325274435s: waiting for machine to come up
	I1008 19:07:39.713420  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.713902  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Found IP for machine: 192.168.50.213
	I1008 19:07:39.713926  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserving static IP address...
	I1008 19:07:39.713945  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has current primary IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.714332  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.714362  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserved static IP address: 192.168.50.213
	I1008 19:07:39.714382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | skip adding static IP to network mk-default-k8s-diff-port-142496 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"}
	I1008 19:07:39.714401  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Getting to WaitForSSH function...
	I1008 19:07:39.714415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for SSH to be available...
	I1008 19:07:39.716542  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.716905  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.716951  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.717025  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH client type: external
	I1008 19:07:39.717052  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa (-rw-------)
	I1008 19:07:39.717111  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:39.717147  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | About to run SSH command:
	I1008 19:07:39.717165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | exit 0
	I1008 19:07:39.842089  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:39.842499  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetConfigRaw
	I1008 19:07:39.843125  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:39.845604  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.845976  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.846008  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.846276  585096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:07:39.846509  585096 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:39.846541  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:39.846768  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.849107  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849411  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.849435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849743  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.849924  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850084  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850236  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.850422  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.850679  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.850695  585096 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:39.950481  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:39.950507  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.950796  585096 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142496"
	I1008 19:07:39.950825  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.951016  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.953300  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.953678  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953833  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.954002  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954168  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954297  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.954450  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.954621  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.954636  585096 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142496 && echo "default-k8s-diff-port-142496" | sudo tee /etc/hostname
	I1008 19:07:40.068848  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142496
	
	I1008 19:07:40.068876  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.071855  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072195  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.072226  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072392  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.072563  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072746  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072871  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.073039  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.073237  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.073257  585096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:40.183039  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:40.183073  585096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:40.183116  585096 buildroot.go:174] setting up certificates
	I1008 19:07:40.183131  585096 provision.go:84] configureAuth start
	I1008 19:07:40.183146  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:40.183451  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:40.185904  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186264  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.186284  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186453  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.188672  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.189037  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189134  585096 provision.go:143] copyHostCerts
	I1008 19:07:40.189204  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:40.189217  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:40.189281  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:40.189427  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:40.189441  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:40.189474  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:40.189563  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:40.189573  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:40.189600  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:40.189679  585096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142496 san=[127.0.0.1 192.168.50.213 default-k8s-diff-port-142496 localhost minikube]
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:37.693230  585014 addons.go:510] duration metric: took 1.421218857s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1008 19:07:38.466754  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:40.967492  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:40.418969  585096 provision.go:177] copyRemoteCerts
	I1008 19:07:40.419032  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:40.419060  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.421382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421701  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.421730  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421912  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.422108  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.422287  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.422426  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.500533  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:40.524199  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 19:07:40.547495  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:07:40.570656  585096 provision.go:87] duration metric: took 387.509086ms to configureAuth
	I1008 19:07:40.570687  585096 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:40.570859  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:40.570934  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.573578  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.573941  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.573970  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.574088  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.574290  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574534  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574680  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.574881  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.575056  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.575074  585096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:40.795575  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:40.795604  585096 machine.go:96] duration metric: took 949.073836ms to provisionDockerMachine
	I1008 19:07:40.795618  585096 start.go:293] postStartSetup for "default-k8s-diff-port-142496" (driver="kvm2")
	I1008 19:07:40.795629  585096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:40.795646  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:40.796003  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:40.796042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.798307  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798635  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.798666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798881  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.799039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.799249  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.799369  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.880470  585096 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:40.884632  585096 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:40.884660  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:40.884719  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:40.884834  585096 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:40.884947  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:40.893828  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:40.917278  585096 start.go:296] duration metric: took 121.644332ms for postStartSetup
	I1008 19:07:40.917320  585096 fix.go:56] duration metric: took 19.950206082s for fixHost
	I1008 19:07:40.917342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.919971  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920315  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.920342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920539  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.920782  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.920969  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.921114  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.921292  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.921519  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.921535  585096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:41.022573  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414460.977520721
	
	I1008 19:07:41.022596  585096 fix.go:216] guest clock: 1728414460.977520721
	I1008 19:07:41.022603  585096 fix.go:229] Guest: 2024-10-08 19:07:40.977520721 +0000 UTC Remote: 2024-10-08 19:07:40.917324605 +0000 UTC m=+230.557951471 (delta=60.196116ms)
	I1008 19:07:41.022627  585096 fix.go:200] guest clock delta is within tolerance: 60.196116ms
	I1008 19:07:41.022634  585096 start.go:83] releasing machines lock for "default-k8s-diff-port-142496", held for 20.055558507s
	I1008 19:07:41.022665  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.022896  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:41.025861  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026272  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.026301  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026479  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027126  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027537  585096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:41.027581  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.027725  585096 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:41.027749  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.030474  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.030745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031094  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031123  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031148  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031322  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031511  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031572  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031827  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.031883  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.135368  585096 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:41.141492  585096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:41.288617  585096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:41.295482  585096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:41.295550  585096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:41.310709  585096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:41.310738  585096 start.go:495] detecting cgroup driver to use...
	I1008 19:07:41.310821  585096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:41.328574  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:41.342506  585096 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:41.342564  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:41.356308  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:41.372510  585096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:41.497084  585096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:41.665187  585096 docker.go:233] disabling docker service ...
	I1008 19:07:41.665272  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:41.682309  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:41.702567  585096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:41.882727  585096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:42.006479  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:42.020474  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:42.039750  585096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:42.039834  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.050395  585096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:42.050449  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.060572  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.071974  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.083208  585096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:42.097166  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.110090  585096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.128424  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.139296  585096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:42.148278  585096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:42.148320  585096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:42.164007  585096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:42.173218  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:42.303890  585096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:42.412074  585096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:42.412155  585096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:42.418606  585096 start.go:563] Will wait 60s for crictl version
	I1008 19:07:42.418662  585096 ssh_runner.go:195] Run: which crictl
	I1008 19:07:42.422670  585096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:42.469322  585096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:42.469432  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.501089  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.530412  585096 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:42.531554  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:42.534587  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.534928  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:42.534968  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.535235  585096 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:42.539279  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:42.552259  585096 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:42.552380  585096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:42.552447  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:42.588849  585096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:42.588928  585096 ssh_runner.go:195] Run: which lz4
	I1008 19:07:42.592785  585096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:42.597089  585096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:42.597119  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:44.003959  585096 crio.go:462] duration metric: took 1.411213503s to copy over tarball
	I1008 19:07:44.004075  585096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:43.467315  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:43.975147  585014 node_ready.go:49] node "embed-certs-783146" has status "Ready":"True"
	I1008 19:07:43.975180  585014 node_ready.go:38] duration metric: took 7.512429362s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:43.975194  585014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:43.982537  585014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999539  585014 pod_ready.go:93] pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:43.999566  585014 pod_ready.go:82] duration metric: took 16.995034ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999578  585014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506007  585014 pod_ready.go:93] pod "etcd-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:44.506032  585014 pod_ready.go:82] duration metric: took 506.447262ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506043  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
	I1008 19:07:46.159478  585096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155358377s)
	I1008 19:07:46.159512  585096 crio.go:469] duration metric: took 2.155494994s to extract the tarball
	I1008 19:07:46.159532  585096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:46.196073  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:46.239224  585096 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:46.239253  585096 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:46.239263  585096 kubeadm.go:934] updating node { 192.168.50.213 8444 v1.31.1 crio true true} ...
	I1008 19:07:46.239412  585096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:46.239482  585096 ssh_runner.go:195] Run: crio config
	I1008 19:07:46.284916  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:46.284941  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:46.284959  585096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:46.284980  585096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142496 NodeName:default-k8s-diff-port-142496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:46.285145  585096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142496"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
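	The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch, not minikube code, for inspecting such a file; it assumes the YAML has been saved locally as kubeadm.yaml (hypothetical path) and simply prints each document's apiVersion and kind:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Decode one YAML document at a time; the file contains several,
		// separated by "---".
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}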
	
	I1008 19:07:46.285218  585096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:46.295176  585096 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:46.295278  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:46.304340  585096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1008 19:07:46.320234  585096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:46.336215  585096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1008 19:07:46.352435  585096 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:46.355991  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:46.367424  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:46.491070  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:46.509165  585096 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496 for IP: 192.168.50.213
	I1008 19:07:46.509192  585096 certs.go:194] generating shared ca certs ...
	I1008 19:07:46.509213  585096 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:46.509413  585096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:46.509488  585096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:46.509507  585096 certs.go:256] generating profile certs ...
	I1008 19:07:46.509642  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/client.key
	I1008 19:07:46.509724  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key.8b79a92b
	I1008 19:07:46.509806  585096 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key
	I1008 19:07:46.510014  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:46.510069  585096 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:46.510082  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:46.510109  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:46.510154  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:46.510177  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:46.510220  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:46.510965  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:46.548979  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:46.588042  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:46.617201  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:46.645499  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 19:07:46.673075  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:46.705336  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:46.727739  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:07:46.755352  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:46.782421  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:46.804813  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:46.827321  585096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:46.843375  585096 ssh_runner.go:195] Run: openssl version
	I1008 19:07:46.848936  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:46.860851  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865320  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865379  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.871107  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:46.881518  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:46.891868  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.895991  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.896026  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.901219  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:46.914282  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:46.925095  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929407  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929465  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.934778  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:46.946807  585096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:46.951173  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:46.957072  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:46.962822  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:46.968584  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:46.974679  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:46.980081  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
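	Before restarting the control plane, the log above verifies each profile certificate with `openssl x509 -noout -in <cert> -checkend 86400`, i.e. "does this cert stay valid for at least another 24 hours". A minimal Go sketch of the equivalent check, not minikube's own implementation, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same file the log checks with `openssl x509 ... -checkend 86400`.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}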
	I1008 19:07:46.985537  585096 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:46.985659  585096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:46.985706  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.025838  585096 cri.go:89] found id: ""
	I1008 19:07:47.025924  585096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:47.037778  585096 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:47.037800  585096 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:47.037847  585096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:47.049787  585096 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:47.050778  585096 kubeconfig.go:125] found "default-k8s-diff-port-142496" server: "https://192.168.50.213:8444"
	I1008 19:07:47.052921  585096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:47.062696  585096 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I1008 19:07:47.062747  585096 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:47.062775  585096 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:47.062822  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.101981  585096 cri.go:89] found id: ""
	I1008 19:07:47.102054  585096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:47.119421  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:47.129168  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:47.129189  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:47.129253  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:07:47.138071  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:47.138125  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:47.147202  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:07:47.155923  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:47.155979  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:47.164829  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.173366  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:47.173413  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.182417  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:07:47.191170  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:47.191228  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:47.200115  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:47.209146  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:47.314572  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.318198  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003546788s)
	I1008 19:07:48.318245  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.533505  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.617977  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.743670  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:48.743782  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.244765  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.744287  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.243920  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:46.513648  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:49.013409  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:50.422334  585014 pod_ready.go:93] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.422364  585014 pod_ready.go:82] duration metric: took 5.916314463s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.422379  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929739  585014 pod_ready.go:93] pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.929775  585014 pod_ready.go:82] duration metric: took 507.386631ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929790  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935612  585014 pod_ready.go:93] pod "kube-proxy-9l7t7" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.935638  585014 pod_ready.go:82] duration metric: took 5.84081ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935650  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941106  585014 pod_ready.go:93] pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.941131  585014 pod_ready.go:82] duration metric: took 5.47259ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941143  585014 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
	I1008 19:07:50.744524  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.830124  585096 api_server.go:72] duration metric: took 2.086461343s to wait for apiserver process to appear ...
	I1008 19:07:50.830161  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:50.830192  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:50.830915  585096 api_server.go:269] stopped: https://192.168.50.213:8444/healthz: Get "https://192.168.50.213:8444/healthz": dial tcp 192.168.50.213:8444: connect: connection refused
	I1008 19:07:51.331031  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.027442  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.027468  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.027483  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.101043  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.101073  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.330385  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.335009  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.335035  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:54.830407  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.835912  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.835939  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:55.330454  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:55.336271  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:07:55.343556  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:55.343586  585096 api_server.go:131] duration metric: took 4.513416619s to wait for apiserver health ...
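	The healthz wait above polls https://192.168.50.213:8444/healthz until it returns 200: anonymous requests are first rejected with 403, then the endpoint reports 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally "ok". A minimal sketch of such a poll in Go, not minikube's implementation; TLS verification is skipped for brevity and the URL is taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal cert, so a quick probe like
		// this skips verification; minikube itself uses proper client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.213:8444/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}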
	I1008 19:07:55.343604  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:55.343612  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:55.345259  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:55.346612  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:55.357899  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:55.383903  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:52.948407  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:55.449059  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:07:55.397903  585096 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:55.397935  585096 system_pods.go:61] "coredns-7c65d6cfc9-tkg8j" [0b436a1f-2b8e-4a5f-8063-695480275f2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:55.397944  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [cc702ae5-7e74-4a18-942e-1d236d39c43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:55.397952  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [da72d2f3-aab5-42c3-9733-7c0ce470e61e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:55.397959  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [de964717-b4de-4c7c-a9b5-164e7a048d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:55.397966  585096 system_pods.go:61] "kube-proxy-lwggr" [d5d96599-c3d3-4eba-a2ad-0c027e8ef1ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:55.397971  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [9218d69d-97ca-4680-856b-95c43fa371ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:55.397976  585096 system_pods.go:61] "metrics-server-6867b74b74-pfc2c" [9bafd6da-a33e-4182-a0d7-5e4c9473f057] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:55.397982  585096 system_pods.go:61] "storage-provisioner" [b60980ab-2552-404e-b351-4b163a075732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:55.397988  585096 system_pods.go:74] duration metric: took 14.056648ms to wait for pod list to return data ...
	I1008 19:07:55.397997  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:55.403870  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:55.403906  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:55.403920  585096 node_conditions.go:105] duration metric: took 5.917994ms to run NodePressure ...
	I1008 19:07:55.403941  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:55.677555  585096 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682514  585096 kubeadm.go:739] kubelet initialised
	I1008 19:07:55.682539  585096 kubeadm.go:740] duration metric: took 4.953783ms waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682550  585096 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:55.688641  585096 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:57.695361  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.195582  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:57.948167  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.446946  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
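Note: the server certificate generated in the provision step above embeds the SAN list shown there (127.0.0.1, 192.168.39.90, localhost, minikube, old-k8s-version-256554). If TLS to the machine ever fails, those SANs can be confirmed on the node itself; a minimal sketch, assuming the cert sits at the remote path used by copyRemoteCerts below:
	# Inspect the SANs baked into the provisioned server certificate.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'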
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.834691  584371 start.go:364] duration metric: took 54.903126319s to acquireMachinesLock for "no-preload-966632"
	I1008 19:08:01.834745  584371 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:08:01.834753  584371 fix.go:54] fixHost starting: 
	I1008 19:08:01.835158  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:01.835200  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:01.854850  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1008 19:08:01.855220  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:01.855740  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:01.855770  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:01.856201  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:01.856428  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:01.856587  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:01.857921  584371 fix.go:112] recreateIfNeeded on no-preload-966632: state=Stopped err=<nil>
	I1008 19:08:01.857943  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	W1008 19:08:01.858110  584371 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:08:01.859994  584371 out.go:177] * Restarting existing kvm2 VM for "no-preload-966632" ...
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
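Condensed, the container-runtime setup performed between 19:08:02 and 19:08:03 above is equivalent to the following shell sequence (a sketch assembled from the commands visible in the log, not an additional step the test ran):
	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch CRI-O to the cgroupfs driver.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Kernel prerequisites for bridged pod networking, then restart the runtime.
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio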
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 19:08:01.861355  584371 main.go:141] libmachine: (no-preload-966632) Calling .Start
	I1008 19:08:01.861539  584371 main.go:141] libmachine: (no-preload-966632) Ensuring networks are active...
	I1008 19:08:01.862455  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network default is active
	I1008 19:08:01.862878  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network mk-no-preload-966632 is active
	I1008 19:08:01.863368  584371 main.go:141] libmachine: (no-preload-966632) Getting domain xml...
	I1008 19:08:01.864106  584371 main.go:141] libmachine: (no-preload-966632) Creating domain...
	I1008 19:08:03.179854  584371 main.go:141] libmachine: (no-preload-966632) Waiting to get IP...
	I1008 19:08:03.180838  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.181232  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.181301  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.181206  586496 retry.go:31] will retry after 229.567854ms: waiting for machine to come up
	I1008 19:08:03.412710  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.413201  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.413225  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.413170  586496 retry.go:31] will retry after 361.675143ms: waiting for machine to come up
	I1008 19:08:03.776466  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.777140  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.777184  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.777047  586496 retry.go:31] will retry after 323.194852ms: waiting for machine to come up
	I1008 19:08:04.101865  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.102357  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.102388  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.102310  586496 retry.go:31] will retry after 484.995282ms: waiting for machine to come up
	I1008 19:08:02.698935  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:05.195930  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:02.447582  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:04.450889  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:08:04.589063  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.589752  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.589780  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.589706  586496 retry.go:31] will retry after 543.703113ms: waiting for machine to come up
	I1008 19:08:05.135522  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.135997  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.136023  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.135944  586496 retry.go:31] will retry after 617.479763ms: waiting for machine to come up
	I1008 19:08:05.754978  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.755541  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.755568  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.755486  586496 retry.go:31] will retry after 849.017716ms: waiting for machine to come up
	I1008 19:08:06.606621  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:06.607072  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:06.607105  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:06.607023  586496 retry.go:31] will retry after 1.133489837s: waiting for machine to come up
	I1008 19:08:07.742713  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:07.743299  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:07.743329  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:07.743252  586496 retry.go:31] will retry after 1.797316795s: waiting for machine to come up
	I1008 19:08:07.196317  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.698409  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.698443  585096 pod_ready.go:82] duration metric: took 12.009772792s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.698475  585096 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.708991  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.709015  585096 pod_ready.go:82] duration metric: took 10.527401ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.709028  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714343  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.714369  585096 pod_ready.go:82] duration metric: took 5.331417ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714383  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.118973  585096 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:06.948829  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:09.448376  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
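The "needs transfer" decisions above come down to comparing the image ID the runtime reports against the ID minikube expects; roughly, for one of the images (the hash is the one quoted in the kube-apiserver line above, and the same check repeats for each image):
	# Does the runtime already hold the expected kube-apiserver image?
	expected="ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	actual=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0 2>/dev/null || true)
	if [ "$actual" != "$expected" ]; then
	  # Not present (or a different build): drop any stale copy so it can be reloaded from the cache.
	  sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0 || true
	fi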
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
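The block above is the kubeadm/kubelet/kube-proxy configuration that minikube renders in memory and then ships to the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp and diff steps that follow). If the rendered config needs to be checked by hand, a minimal sketch using minikube's ssh wrapper — profile name and paths are taken from this log, the exact invocation is an assumption:

    # inspect the config file minikube just generated on the guest (sketch)
    minikube ssh -p old-k8s-version-256554 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # compare against whatever was left over from a previous run, as minikube itself does below
    minikube ssh -p old-k8s-version-256554 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
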
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
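The one-liner above is the usual workaround for sudo not applying to the shell's output redirection: the filtered hosts file plus the fresh control-plane.minikube.internal entry are written to a temp file as the unprivileged user, and only the final cp runs under sudo. The same idiom in isolation (a sketch, not taken verbatim from minikube):

    # rewrite a root-owned file without running into 'sudo >' permission problems
    { grep -v 'control-plane.minikube.internal' /etc/hosts; echo '192.168.39.90 control-plane.minikube.internal'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
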
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
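	The six `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate is still valid 24 hours (86400 seconds) from now; the command exits 0 when the certificate does not expire inside that window, which is what lets minikube skip regenerating it. The same check standalone (path taken from the log):

    # exits 0 if the cert is still valid 24h from now, 1 otherwise
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"
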
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:09.541879  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:09.542340  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:09.542372  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:09.542288  586496 retry.go:31] will retry after 2.238590286s: waiting for machine to come up
	I1008 19:08:11.783440  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:11.783909  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:11.783945  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:11.783858  586496 retry.go:31] will retry after 2.226110801s: waiting for machine to come up
	I1008 19:08:14.012103  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:14.012538  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:14.012561  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:14.012493  586496 retry.go:31] will retry after 2.298206633s: waiting for machine to come up
	I1008 19:08:10.849833  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.849856  585096 pod_ready.go:82] duration metric: took 3.13546554s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.849868  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858341  585096 pod_ready.go:93] pod "kube-proxy-lwggr" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.858367  585096 pod_ready.go:82] duration metric: took 8.492572ms for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858379  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865890  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.865909  585096 pod_ready.go:82] duration metric: took 7.521945ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865918  585096 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:12.873861  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:15.372408  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.450482  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:13.948331  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.313868  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:16.314460  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:16.314484  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:16.314424  586496 retry.go:31] will retry after 3.672085858s: waiting for machine to come up
	I1008 19:08:17.872689  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.372637  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.448090  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:18.947580  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.948804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.989014  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989556  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has current primary IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989576  584371 main.go:141] libmachine: (no-preload-966632) Found IP for machine: 192.168.61.141
	I1008 19:08:19.989589  584371 main.go:141] libmachine: (no-preload-966632) Reserving static IP address...
	I1008 19:08:19.990000  584371 main.go:141] libmachine: (no-preload-966632) Reserved static IP address: 192.168.61.141
	I1008 19:08:19.990036  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.990048  584371 main.go:141] libmachine: (no-preload-966632) Waiting for SSH to be available...
	I1008 19:08:19.990068  584371 main.go:141] libmachine: (no-preload-966632) DBG | skip adding static IP to network mk-no-preload-966632 - found existing host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"}
	I1008 19:08:19.990076  584371 main.go:141] libmachine: (no-preload-966632) DBG | Getting to WaitForSSH function...
	I1008 19:08:19.992644  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.992970  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.993010  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.993081  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH client type: external
	I1008 19:08:19.993104  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa (-rw-------)
	I1008 19:08:19.993136  584371 main.go:141] libmachine: (no-preload-966632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:19.993152  584371 main.go:141] libmachine: (no-preload-966632) DBG | About to run SSH command:
	I1008 19:08:19.993174  584371 main.go:141] libmachine: (no-preload-966632) DBG | exit 0
	I1008 19:08:20.118205  584371 main.go:141] libmachine: (no-preload-966632) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:20.118616  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetConfigRaw
	I1008 19:08:20.119326  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.122203  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122678  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.122708  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122926  584371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 19:08:20.123144  584371 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:20.123164  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:20.123360  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.125759  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126083  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.126108  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126265  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.126442  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.126980  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.127189  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.127201  584371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:20.234458  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:20.234491  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.234781  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:08:20.234811  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.235044  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.237673  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.237993  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.238016  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.238221  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.238418  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238612  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238806  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.238981  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.239176  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.239203  584371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-966632 && echo "no-preload-966632" | sudo tee /etc/hostname
	I1008 19:08:20.360621  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-966632
	
	I1008 19:08:20.360649  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.363600  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.363909  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.363947  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.364166  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.364297  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364426  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364510  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.364630  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.364855  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.364881  584371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-966632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-966632/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-966632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:20.483101  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:20.483131  584371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:20.483149  584371 buildroot.go:174] setting up certificates
	I1008 19:08:20.483161  584371 provision.go:84] configureAuth start
	I1008 19:08:20.483171  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.483429  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.486467  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.486838  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.486871  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.487037  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.489207  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489531  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.489557  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489655  584371 provision.go:143] copyHostCerts
	I1008 19:08:20.489726  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:20.489737  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:20.489803  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:20.489927  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:20.489939  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:20.489987  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:20.490072  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:20.490083  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:20.490110  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:20.490231  584371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.no-preload-966632 san=[127.0.0.1 192.168.61.141 localhost minikube no-preload-966632]
	I1008 19:08:20.618050  584371 provision.go:177] copyRemoteCerts
	I1008 19:08:20.618117  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:20.618149  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.621118  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621458  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.621485  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621670  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.621875  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.622056  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.622224  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:20.704439  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:20.730441  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:08:20.755072  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
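provision.go above mints a fresh server certificate with the SANs [127.0.0.1 192.168.61.141 localhost minikube no-preload-966632] and copies it to /etc/docker/server.pem on the guest. To confirm the SANs actually landed in the deployed cert (a sketch; run inside the guest):

    # print the Subject Alternative Name extension of the provisioned server cert
    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
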
	I1008 19:08:20.777513  584371 provision.go:87] duration metric: took 294.340685ms to configureAuth
	I1008 19:08:20.777550  584371 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:20.777774  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:08:20.777873  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.780540  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.780956  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.780995  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.781185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.781423  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781615  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.781989  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.782179  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.782203  584371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:21.003896  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:21.003925  584371 machine.go:96] duration metric: took 880.766243ms to provisionDockerMachine
	I1008 19:08:21.003940  584371 start.go:293] postStartSetup for "no-preload-966632" (driver="kvm2")
	I1008 19:08:21.003955  584371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:21.003974  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.004286  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:21.004312  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.007138  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007472  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.007500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007610  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.007820  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.007991  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.008163  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.093075  584371 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:21.097048  584371 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:21.097076  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:21.097160  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:21.097254  584371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:21.097370  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:21.106698  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:21.130484  584371 start.go:296] duration metric: took 126.530716ms for postStartSetup
	I1008 19:08:21.130526  584371 fix.go:56] duration metric: took 19.295774496s for fixHost
	I1008 19:08:21.130550  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.133361  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.133717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.133744  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.134048  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.134269  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134525  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134710  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.134888  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:21.135119  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:21.135135  584371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:21.242740  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414501.194174379
	
	I1008 19:08:21.242765  584371 fix.go:216] guest clock: 1728414501.194174379
	I1008 19:08:21.242776  584371 fix.go:229] Guest: 2024-10-08 19:08:21.194174379 +0000 UTC Remote: 2024-10-08 19:08:21.130530022 +0000 UTC m=+356.786912807 (delta=63.644357ms)
	I1008 19:08:21.242823  584371 fix.go:200] guest clock delta is within tolerance: 63.644357ms
	I1008 19:08:21.242835  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 19.408108613s
	I1008 19:08:21.242857  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.243112  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:21.245967  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246378  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.246409  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246731  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247314  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247500  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247588  584371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:21.247640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.247706  584371 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:21.247731  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.250191  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250228  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250665  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250694  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250729  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250789  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250948  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250962  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251129  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251314  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.251334  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251462  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.353600  584371 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:21.360031  584371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:21.502001  584371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:21.508846  584371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:21.508938  584371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:21.524597  584371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:21.524626  584371 start.go:495] detecting cgroup driver to use...
	I1008 19:08:21.524699  584371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:21.541500  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:21.553886  584371 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:21.553943  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:21.567027  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:21.579965  584371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:21.692823  584371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:21.844393  584371 docker.go:233] disabling docker service ...
	I1008 19:08:21.844461  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:21.860471  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:21.873229  584371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:22.003106  584371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:22.129301  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:22.143314  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:22.161423  584371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:08:22.161494  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.171355  584371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:22.171429  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.180962  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.190212  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.199737  584371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:22.209488  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.219051  584371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.235430  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
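The sed/grep commands above rewrite the CRI-O drop-in so that the pause image, cgroup driver, conmon cgroup and the unprivileged-port sysctl match what minikube expects. A minimal verification sketch, assuming the same drop-in path used in the commands above (exact file layout on the VM may differ):
  # Sketch: confirm the keys the edits above should have left in the CRI-O drop-in.
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # Expected (roughly): pause_image = "registry.k8s.io/pause:3.10",
  # cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
  # "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls.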
	I1008 19:08:22.245007  584371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:22.253705  584371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:22.253748  584371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:22.265343  584371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
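Because the guest image does not load br_netfilter by default, the sysctl probe above fails and minikube falls back to modprobe plus enabling IPv4 forwarding. A rough manual equivalent, assuming a root shell inside the VM (loading the module already defaults the bridge sysctl to 1; setting it explicitly is shown only for clarity):
  # Load the bridge netfilter module so iptables sees bridged pod traffic,
  # then enable the forwarding sysctls the runtime expects (sketch).
  sudo modprobe br_netfilter
  sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
  # Verify both values:
  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward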
	I1008 19:08:22.275245  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:22.380960  584371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:22.471004  584371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:22.471067  584371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:22.475520  584371 start.go:563] Will wait 60s for crictl version
	I1008 19:08:22.475598  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.479271  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:22.523709  584371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:22.523787  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.551307  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.579271  584371 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:08:22.580608  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:22.583417  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583783  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:22.583825  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583991  584371 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:22.587937  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:22.600324  584371 kubeadm.go:883] updating cluster {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:22.600465  584371 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:08:22.600506  584371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:22.641111  584371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:08:22.641139  584371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:22.641194  584371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.641224  584371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.641284  584371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.641307  584371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.641377  584371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.641407  584371 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 19:08:22.641742  584371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642057  584371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.642568  584371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.642576  584371 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 19:08:22.642669  584371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.642876  584371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.642894  584371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.643310  584371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.799972  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.811504  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.815340  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.815659  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.817303  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.858380  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.864688  584371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1008 19:08:22.864727  584371 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.864762  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.877332  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1008 19:08:22.934971  584371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1008 19:08:22.935035  584371 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.935085  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945549  584371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1008 19:08:22.945594  584371 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.945644  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945645  584371 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1008 19:08:22.945683  584371 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.945685  584371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1008 19:08:22.945730  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945733  584371 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.945796  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981887  584371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1008 19:08:22.982012  584371 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.982059  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981954  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.082208  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.082210  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.082304  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.082411  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.082430  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.082543  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.178344  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.196633  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.196665  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.196733  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.209763  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.209830  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.310142  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.317659  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.317731  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.327221  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.331490  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.346298  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 19:08:23.346412  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.435656  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 19:08:23.435679  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1008 19:08:23.435783  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:23.435788  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:23.441591  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 19:08:23.441673  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:23.441696  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 19:08:23.441782  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 19:08:23.441814  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:23.441856  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:23.441901  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1008 19:08:23.441918  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.441947  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.445597  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1008 19:08:23.445630  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1008 19:08:23.449022  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1008 19:08:23.450009  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.373452  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:24.872600  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:23.448074  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:25.449287  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.950280  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.508431356s)
	I1008 19:08:25.950340  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.508402491s)
	I1008 19:08:25.950344  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1008 19:08:25.950357  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1008 19:08:25.950545  584371 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.50050623s)
	I1008 19:08:25.950600  584371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1008 19:08:25.950611  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.508516442s)
	I1008 19:08:25.950637  584371 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:25.950648  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1008 19:08:25.950680  584371 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:25.950688  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:25.950727  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:29.225357  584371 ssh_runner.go:235] Completed: which crictl: (3.274648192s)
	I1008 19:08:29.225514  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:29.225532  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.27477814s)
	I1008 19:08:29.225561  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1008 19:08:29.225593  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:29.225627  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:27.373617  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.374173  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:27.948313  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.948750  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.696201  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.470655089s)
	I1008 19:08:30.696255  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.470604601s)
	I1008 19:08:30.696284  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1008 19:08:30.696296  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:30.696317  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.696365  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.740520  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:32.685896  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.989500601s)
	I1008 19:08:32.685941  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1008 19:08:32.685971  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.685971  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.945412846s)
	I1008 19:08:32.686046  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.686045  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 19:08:32.686186  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:31.872718  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:33.873665  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:32.447765  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:34.948257  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.663874  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.977781248s)
	I1008 19:08:34.663914  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1008 19:08:34.663939  584371 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:34.663942  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977724244s)
	I1008 19:08:34.663973  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1008 19:08:34.663991  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:36.833283  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.169263327s)
	I1008 19:08:36.833320  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1008 19:08:36.833353  584371 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:36.833417  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:37.485901  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 19:08:37.485954  584371 cache_images.go:123] Successfully loaded all cached images
	I1008 19:08:37.485961  584371 cache_images.go:92] duration metric: took 14.844810749s to LoadCachedImages
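Since this profile runs without a preload tarball, every image is transferred from the host cache and loaded through podman as shown above. A quick way to confirm CRI-O can now see them, as a sketch using the same crictl binary invoked elsewhere in this log:
  sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'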
	I1008 19:08:37.485973  584371 kubeadm.go:934] updating node { 192.168.61.141 8443 v1.31.1 crio true true} ...
	I1008 19:08:37.486084  584371 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-966632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
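The unit text above is installed as a systemd drop-in (the 10-kubeadm.conf copied to /etc/systemd/system/kubelet.service.d/ later in this log); the empty ExecStart= line clears the command inherited from the packaged kubelet.service before the minikube-specific command line is set. A sketch of how one could confirm which ExecStart is effective after the daemon-reload:
  # Show the merged unit, including drop-ins, and its effective ExecStart (sketch).
  sudo systemctl cat kubelet | grep -B1 -A1 '^ExecStart='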
	I1008 19:08:37.486149  584371 ssh_runner.go:195] Run: crio config
	I1008 19:08:37.544511  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:37.544535  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:37.544554  584371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:37.544576  584371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-966632 NodeName:no-preload-966632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:08:37.544718  584371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-966632"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:37.544792  584371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:08:37.556979  584371 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:37.557049  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:37.566249  584371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 19:08:37.583303  584371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:37.599535  584371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
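The kubeadm config assembled above has just been copied to /var/tmp/minikube/kubeadm.yaml.new and is consumed phase by phase further down (certs, kubeconfig, kubelet-start, control-plane, etcd). As a manual sanity check one could run the preflight phase against it with the same pinned binary; a sketch, assuming the paths shown in this log:
  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init phase preflight \
    --config /var/tmp/minikube/kubeadm.yaml.new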
	I1008 19:08:37.616315  584371 ssh_runner.go:195] Run: grep 192.168.61.141	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:37.620089  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:37.632181  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:37.748647  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:37.765577  584371 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632 for IP: 192.168.61.141
	I1008 19:08:37.765600  584371 certs.go:194] generating shared ca certs ...
	I1008 19:08:37.765619  584371 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:37.765829  584371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:37.765890  584371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:37.765904  584371 certs.go:256] generating profile certs ...
	I1008 19:08:37.766020  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.key
	I1008 19:08:37.766095  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key.a515ed11
	I1008 19:08:37.766143  584371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key
	I1008 19:08:37.766334  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:37.766383  584371 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:37.766398  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:37.766430  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:37.766467  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:37.766501  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:37.766562  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:37.767588  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:37.804400  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:37.837466  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:37.865516  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:37.894827  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:08:37.918668  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:08:37.948238  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:37.974152  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:08:37.997284  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:38.019295  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:38.043392  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:38.067971  584371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:38.084940  584371 ssh_runner.go:195] Run: openssl version
	I1008 19:08:38.090779  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:38.102715  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107292  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107355  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.113456  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:38.123904  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:38.134337  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138503  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138561  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.143902  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:38.155393  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:38.167107  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171433  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171480  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.176968  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:38.188437  584371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:38.192733  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:38.198531  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:38.204187  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:38.210522  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:38.216328  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:38.222077  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
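Each openssl invocation above exits 0 only if the certificate remains valid for at least 86400 seconds (24 h), presumably so minikube can decide whether regeneration is needed. A sketch of the same check made explicit:
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo 'valid for at least 24h' || echo 'expires within 24h'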
	I1008 19:08:38.227724  584371 kubeadm.go:392] StartCluster: {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:38.227802  584371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:38.227882  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.262461  584371 cri.go:89] found id: ""
	I1008 19:08:38.262532  584371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:38.272591  584371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:38.272612  584371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:38.272677  584371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:38.282621  584371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:38.283683  584371 kubeconfig.go:125] found "no-preload-966632" server: "https://192.168.61.141:8443"
	I1008 19:08:38.286019  584371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:38.295315  584371 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.141
	I1008 19:08:38.295344  584371 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:38.295357  584371 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:38.295400  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.329462  584371 cri.go:89] found id: ""
	I1008 19:08:38.329533  584371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:38.345901  584371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:38.354899  584371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:38.354920  584371 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:38.354965  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:38.363242  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:38.363282  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:38.373063  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:38.381479  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:38.381530  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:38.390679  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.400033  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:38.400071  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.409308  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:38.417842  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:38.417876  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
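Each kubeconfig under /etc/kubernetes is grepped above for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist yet), so the kubeconfig phase below can regenerate them. A compact sketch of the same loop:
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done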
	I1008 19:08:38.427251  584371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:38.437010  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:38.562381  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.344247  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:36.372911  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:38.872768  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:37.448043  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:39.956579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.550458  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.619345  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.718016  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:39.718126  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.218974  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.719108  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.741178  584371 api_server.go:72] duration metric: took 1.023163924s to wait for apiserver process to appear ...
	I1008 19:08:40.741210  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:08:40.741235  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:40.741767  584371 api_server.go:269] stopped: https://192.168.61.141:8443/healthz: Get "https://192.168.61.141:8443/healthz": dial tcp 192.168.61.141:8443: connect: connection refused
	I1008 19:08:41.241356  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.787235  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:08:43.787284  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:08:43.787306  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.914606  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:43.914653  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:44.242033  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.247068  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.247097  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:40.873394  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:43.373475  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:42.446900  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:44.447141  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.742212  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.756340  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.756371  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.241997  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.246343  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.246367  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.741898  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.749274  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.749301  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.241889  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.246127  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.246155  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.741694  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.746192  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.746219  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:47.242250  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:47.246571  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:08:47.252812  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:08:47.252843  584371 api_server.go:131] duration metric: took 6.511626175s to wait for apiserver health ...
	I1008 19:08:47.252852  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:47.252858  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:47.254723  584371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:08:47.255933  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:08:47.266073  584371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:08:47.284042  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:08:47.293401  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:08:47.293432  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:08:47.293439  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:08:47.293450  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:08:47.293456  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:08:47.293464  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:08:47.293469  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:08:47.293474  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:08:47.293478  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:08:47.293484  584371 system_pods.go:74] duration metric: took 9.422158ms to wait for pod list to return data ...
	I1008 19:08:47.293493  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:08:47.296923  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:08:47.296947  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:08:47.296960  584371 node_conditions.go:105] duration metric: took 3.462212ms to run NodePressure ...
	I1008 19:08:47.296979  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:47.562271  584371 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566914  584371 kubeadm.go:739] kubelet initialised
	I1008 19:08:47.566938  584371 kubeadm.go:740] duration metric: took 4.63692ms waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566950  584371 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:47.571271  584371 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.575633  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575659  584371 pod_ready.go:82] duration metric: took 4.364181ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.575671  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575680  584371 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.579443  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579465  584371 pod_ready.go:82] duration metric: took 3.775248ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.579475  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579483  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.583747  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583775  584371 pod_ready.go:82] duration metric: took 4.277306ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.583785  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583797  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.687618  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687652  584371 pod_ready.go:82] duration metric: took 103.843425ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.687663  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687669  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.087568  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087601  584371 pod_ready.go:82] duration metric: took 399.92202ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.087613  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087622  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.487223  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487256  584371 pod_ready.go:82] duration metric: took 399.625038ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.487269  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487278  584371 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.887764  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887798  584371 pod_ready.go:82] duration metric: took 400.504473ms for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.887812  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887821  584371 pod_ready.go:39] duration metric: took 1.320859293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:48.887842  584371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:08:48.901255  584371 ops.go:34] apiserver oom_adj: -16
	I1008 19:08:48.901279  584371 kubeadm.go:597] duration metric: took 10.628659432s to restartPrimaryControlPlane
	I1008 19:08:48.901290  584371 kubeadm.go:394] duration metric: took 10.673572592s to StartCluster
	I1008 19:08:48.901313  584371 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.901397  584371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:48.904024  584371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.904361  584371 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:08:48.904455  584371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:08:48.904549  584371 addons.go:69] Setting storage-provisioner=true in profile "no-preload-966632"
	I1008 19:08:48.904565  584371 addons.go:69] Setting default-storageclass=true in profile "no-preload-966632"
	I1008 19:08:48.904594  584371 addons.go:234] Setting addon storage-provisioner=true in "no-preload-966632"
	W1008 19:08:48.904603  584371 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:08:48.904603  584371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-966632"
	I1008 19:08:48.904574  584371 addons.go:69] Setting metrics-server=true in profile "no-preload-966632"
	I1008 19:08:48.904646  584371 addons.go:234] Setting addon metrics-server=true in "no-preload-966632"
	I1008 19:08:48.904651  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.904652  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1008 19:08:48.904670  584371 addons.go:243] addon metrics-server should already be in state true
	I1008 19:08:48.904705  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.905079  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905116  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905133  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905151  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905159  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905205  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.906774  584371 out.go:177] * Verifying Kubernetes components...
	I1008 19:08:48.908138  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:48.942865  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1008 19:08:48.943612  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.944201  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.944232  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.944667  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.944748  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1008 19:08:48.945485  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.945526  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.945763  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.946464  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.946484  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.946530  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1008 19:08:48.946935  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.947052  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.947649  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.947693  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.948006  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.948027  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.948379  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.948602  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.951770  584371 addons.go:234] Setting addon default-storageclass=true in "no-preload-966632"
	W1008 19:08:48.951788  584371 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:08:48.951819  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.952055  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.952095  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.962422  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1008 19:08:48.962931  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.963509  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.963532  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.963908  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.964117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.965879  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.967812  584371 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:08:48.967853  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1008 19:08:48.967817  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1008 19:08:48.968376  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968436  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968885  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.968906  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.968964  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:08:48.968986  584371 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:08:48.969010  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.969290  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.969449  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.969472  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.969910  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.969941  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.970187  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.970430  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.972100  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972523  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.972544  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972677  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.972735  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.973016  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.973191  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.973323  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.974390  584371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:48.975651  584371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:48.975670  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:08:48.975686  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.978500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.978855  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.978876  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.979079  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.979474  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.979640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.979766  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.994846  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1008 19:08:48.995180  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.995592  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.995607  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.995976  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.996173  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.998270  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.998549  584371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:48.998568  584371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:08:48.998591  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:49.000647  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.000908  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:49.000924  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.001078  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:49.001185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:49.001282  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:49.001358  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:49.118217  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:49.138077  584371 node_ready.go:35] waiting up to 6m0s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:49.217300  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:49.241237  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:49.365395  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:08:49.365420  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:08:45.873500  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.373215  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:49.403596  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:08:49.403625  584371 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:08:49.438480  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:49.438540  584371 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:08:49.464366  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:50.474783  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.233506833s)
	I1008 19:08:50.474850  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474862  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.474914  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.257567473s)
	I1008 19:08:50.474955  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474964  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475191  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475206  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475215  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475221  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475280  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475289  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475297  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475303  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475310  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475441  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475454  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475582  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475596  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475628  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482003  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.482031  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.482315  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.482351  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482372  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.512902  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.048483922s)
	I1008 19:08:50.512957  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.512980  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513241  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513257  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513261  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513299  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.513307  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513534  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513552  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513561  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513577  584371 addons.go:475] Verifying addon metrics-server=true in "no-preload-966632"
	I1008 19:08:50.515302  584371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:08:46.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.448332  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:50.449239  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.516457  584371 addons.go:510] duration metric: took 1.612011936s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:08:51.141437  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:53.142166  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:54.141208  584371 node_ready.go:49] node "no-preload-966632" has status "Ready":"True"
	I1008 19:08:54.141238  584371 node_ready.go:38] duration metric: took 5.003121669s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:54.141251  584371 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:54.146685  584371 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151059  584371 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:54.151078  584371 pod_ready.go:82] duration metric: took 4.369406ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151086  584371 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:50.872416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:53.372230  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:52.947461  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:54.950183  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.157153  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.157458  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.658595  584371 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.658617  584371 pod_ready.go:82] duration metric: took 4.507524391s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.658627  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663785  584371 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.663811  584371 pod_ready.go:82] duration metric: took 5.176586ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663823  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668310  584371 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.668342  584371 pod_ready.go:82] duration metric: took 4.509914ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668356  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672380  584371 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.672397  584371 pod_ready.go:82] duration metric: took 4.034104ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672405  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676499  584371 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.676517  584371 pod_ready.go:82] duration metric: took 4.106343ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676527  584371 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:55.873069  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.372424  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:57.448182  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:59.947932  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.682583  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.682958  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:00.872650  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.872783  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:05.371825  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.447340  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:04.447504  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.683354  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.183362  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.183636  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.872083  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.874058  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.947502  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:08.948054  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.683833  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.188267  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:12.371967  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.372404  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.448009  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:13.948106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:15.948926  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:16.683453  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.184073  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:16.873511  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.372353  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:18.446864  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:20.449087  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:21.184209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.683535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:21.373869  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.872606  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:22.947224  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.947961  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:26.183587  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.184349  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:26.373049  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.873435  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.447254  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:29.447470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:30.683953  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.183412  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.371635  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.373244  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.448173  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.458099  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.947741  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:35.184447  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.682974  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.872226  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.872308  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:39.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:38.447494  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:40.947078  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:39.686907  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.183930  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.184443  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.372207  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.872360  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.947712  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:45.447011  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:46.682572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.683539  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.372618  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.373137  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.448127  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.947787  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:50.683784  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:53.183034  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.873417  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.373414  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.948470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.447675  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:55.184058  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.683581  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.872213  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.872323  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.447790  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.947901  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.948561  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:00.182341  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.183137  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:04.186590  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.872469  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.872708  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.372543  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.447536  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.947676  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.683572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.183605  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.373804  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.872253  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.947764  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.948705  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:11.184118  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:13.184173  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.372084  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.373845  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.447832  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.948882  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:15.682883  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.184458  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:16.374416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.872958  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:17.446820  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.448067  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:20.686987  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.182360  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.372104  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.872156  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.947427  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.950711  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:25.183749  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:27.184437  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.372132  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.372299  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.948117  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.948772  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:10:29.682860  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:31.683468  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:34.184018  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.872607  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:32.872784  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.373822  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:33.447573  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.447614  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:36.682470  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:38.683099  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.872270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.872490  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.450208  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.947418  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:40.683975  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.182280  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.372711  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.873233  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.446673  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.447301  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:45.188927  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.682373  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.371936  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:49.372190  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.447866  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:48.947579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.683659  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.182955  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:54.184535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.373113  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.373239  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.446974  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.447804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.947655  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:56.682535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.683610  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.872286  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:57.872418  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:59.872720  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.447354  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:00.447456  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:00.684164  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:03.182802  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.873903  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.372767  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:02.947088  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.948196  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:05.683195  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.184305  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:06.872641  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.872811  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.447814  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:09.948377  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:10.186012  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.682829  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:11.374597  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:13.872241  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.447966  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.448396  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.683559  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.185276  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.372851  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:18.872479  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.947677  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:19.449701  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:19.682652  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.683209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.684681  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.371629  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.373290  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.947319  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:24.446685  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.183070  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.185526  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:25.872866  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.371245  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.371878  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.447559  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.948355  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.949170  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:30.683714  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.182628  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.373119  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:34.872336  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.447847  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:35.947756  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:35.183021  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.682254  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.371644  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:39.372270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.447549  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:40.946928  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.182928  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.873545  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.372751  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.947699  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.949106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:44.684067  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.182248  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.183397  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:46.871983  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:48.872101  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.447284  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.448275  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.949171  585014 pod_ready.go:82] duration metric: took 4m0.008012606s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:11:50.949202  585014 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:11:50.949213  585014 pod_ready.go:39] duration metric: took 4m6.974004451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:11:50.949249  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:11:50.949290  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.949351  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.998560  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:50.998584  585014 cri.go:89] found id: ""
	I1008 19:11:50.998591  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:11:50.998649  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.003407  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:51.003490  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:51.183611  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:53.683418  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.872692  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:52.873320  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:55.372722  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:51.049315  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:51.049343  585014 cri.go:89] found id: ""
	I1008 19:11:51.049353  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:11:51.049411  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.055212  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:51.055281  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:51.101271  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.101292  585014 cri.go:89] found id: ""
	I1008 19:11:51.101300  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:11:51.101360  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.105902  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:51.105966  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:51.150355  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.150390  585014 cri.go:89] found id: ""
	I1008 19:11:51.150402  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:11:51.150468  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.155116  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:51.155193  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:51.197754  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:51.197779  585014 cri.go:89] found id: ""
	I1008 19:11:51.197790  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:11:51.197846  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.201957  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:51.202023  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:51.239982  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:51.240001  585014 cri.go:89] found id: ""
	I1008 19:11:51.240009  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:11:51.240064  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.244580  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:51.244645  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:51.280099  585014 cri.go:89] found id: ""
	I1008 19:11:51.280126  585014 logs.go:282] 0 containers: []
	W1008 19:11:51.280137  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:51.280144  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:11:51.280205  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:11:51.323467  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:51.323508  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:51.323514  585014 cri.go:89] found id: ""
	I1008 19:11:51.323525  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:11:51.323676  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.328091  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.332113  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:51.332139  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:11:51.455430  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:11:51.455463  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.492792  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:11:51.492824  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.533732  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.533768  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:52.085919  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:11:52.085972  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:52.120874  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:52.120912  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:11:52.163961  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164188  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.164330  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164489  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.195681  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:52.195716  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:52.210569  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:11:52.210601  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:52.256667  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:11:52.256700  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:52.303627  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:11:52.303685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:52.340250  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:11:52.340279  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:52.402179  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:11:52.402213  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:52.440288  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:11:52.440326  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:52.478952  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.478979  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:11:52.479043  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:11:52.479060  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479068  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.479077  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479084  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.479092  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.479101  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.182942  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:58.184307  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:57.373902  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:59.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:00.683230  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:03.184294  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.372439  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:04.871327  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.480085  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.498401  585014 api_server.go:72] duration metric: took 4m26.226421652s to wait for apiserver process to appear ...
	I1008 19:12:02.498433  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:12:02.498479  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.498544  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:02.533531  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:02.533563  585014 cri.go:89] found id: ""
	I1008 19:12:02.533575  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:02.533643  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.537914  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:02.537985  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:02.579011  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:02.579039  585014 cri.go:89] found id: ""
	I1008 19:12:02.579049  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:02.579111  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.583628  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:02.583695  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:02.625038  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.625065  585014 cri.go:89] found id: ""
	I1008 19:12:02.625075  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:02.625138  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.629262  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:02.629331  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:02.662964  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:02.662988  585014 cri.go:89] found id: ""
	I1008 19:12:02.662997  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:02.663052  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.666955  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:02.667013  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:02.704552  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:02.704578  585014 cri.go:89] found id: ""
	I1008 19:12:02.704589  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:02.704640  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.708910  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:02.708962  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:02.743196  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.743220  585014 cri.go:89] found id: ""
	I1008 19:12:02.743229  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:02.743276  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.747488  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:02.747563  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:02.789367  585014 cri.go:89] found id: ""
	I1008 19:12:02.789405  585014 logs.go:282] 0 containers: []
	W1008 19:12:02.789418  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:02.789426  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:02.789479  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:02.828607  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:02.828640  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.828646  585014 cri.go:89] found id: ""
	I1008 19:12:02.828656  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:02.828723  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.832981  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.837258  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:02.837284  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.874214  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:02.874249  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.925844  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:02.925879  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.963715  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:02.963744  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.009069  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.009102  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:03.046628  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.046816  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.046947  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.047129  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.080027  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.080068  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:03.203192  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:03.203233  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:03.254645  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:03.254681  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:03.300881  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:03.300918  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:03.347403  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.347440  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.802754  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.802801  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.816658  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:03.816695  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:03.873630  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:03.873670  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:03.910834  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.910862  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:03.910932  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:03.910946  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910955  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.910972  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910983  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.910994  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.911006  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:05.682916  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:07.683273  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:06.872011  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:08.873682  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:09.684055  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.182786  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:14.186868  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:10.866893  585096 pod_ready.go:82] duration metric: took 4m0.000956825s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:10.866937  585096 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1008 19:12:10.866961  585096 pod_ready.go:39] duration metric: took 4m15.184400794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:10.866992  585096 kubeadm.go:597] duration metric: took 4m23.829186185s to restartPrimaryControlPlane
	W1008 19:12:10.867049  585096 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:10.867092  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:13.911944  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:12:13.917530  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:12:13.918513  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:12:13.918537  585014 api_server.go:131] duration metric: took 11.420096691s to wait for apiserver health ...
	I1008 19:12:13.918546  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:12:13.918570  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:13.918621  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:13.957026  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:13.957048  585014 cri.go:89] found id: ""
	I1008 19:12:13.957057  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:13.957114  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:13.961553  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:13.961611  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:13.996466  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:13.996497  585014 cri.go:89] found id: ""
	I1008 19:12:13.996508  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:13.996570  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.000972  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:14.001036  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:14.034888  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.034917  585014 cri.go:89] found id: ""
	I1008 19:12:14.034929  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:14.034989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.039145  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:14.039216  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:14.074109  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:14.074134  585014 cri.go:89] found id: ""
	I1008 19:12:14.074145  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:14.074202  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.078291  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:14.078371  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:14.113375  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:14.113403  585014 cri.go:89] found id: ""
	I1008 19:12:14.113413  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:14.113475  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.117909  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:14.118002  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:14.153800  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:14.153823  585014 cri.go:89] found id: ""
	I1008 19:12:14.153833  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:14.153898  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.158233  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:14.158302  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:14.195093  585014 cri.go:89] found id: ""
	I1008 19:12:14.195123  585014 logs.go:282] 0 containers: []
	W1008 19:12:14.195133  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:14.195142  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:14.195203  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:14.230894  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:14.230917  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:14.230921  585014 cri.go:89] found id: ""
	I1008 19:12:14.230929  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:14.230989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.236299  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.240914  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:14.240940  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:14.282289  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282488  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:14.282643  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282824  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:14.315207  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:14.315235  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:14.433616  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:14.433647  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:14.482640  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:14.482685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.524749  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:14.524788  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:14.979562  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:14.979629  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:15.016898  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:15.016941  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:15.058447  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:15.058478  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:15.114345  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:15.114384  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:15.128920  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:15.128948  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:15.176775  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:15.176817  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:15.215091  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:15.215129  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:15.256687  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:15.256731  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:15.311551  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311583  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:15.311641  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:15.311653  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311664  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:15.311676  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311681  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:15.311687  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311695  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:16.682464  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:19.182595  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:21.184074  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:23.682867  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:25.319305  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:12:25.319336  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.319340  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.319344  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.319348  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.319351  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.319354  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.319362  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.319365  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.319371  585014 system_pods.go:74] duration metric: took 11.400819931s to wait for pod list to return data ...
	I1008 19:12:25.319378  585014 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:12:25.322115  585014 default_sa.go:45] found service account: "default"
	I1008 19:12:25.322135  585014 default_sa.go:55] duration metric: took 2.751457ms for default service account to be created ...
	I1008 19:12:25.322143  585014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:12:25.326570  585014 system_pods.go:86] 8 kube-system pods found
	I1008 19:12:25.326590  585014 system_pods.go:89] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.326595  585014 system_pods.go:89] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.326599  585014 system_pods.go:89] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.326604  585014 system_pods.go:89] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.326610  585014 system_pods.go:89] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.326615  585014 system_pods.go:89] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.326625  585014 system_pods.go:89] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.326633  585014 system_pods.go:89] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.326642  585014 system_pods.go:126] duration metric: took 4.494323ms to wait for k8s-apps to be running ...
	I1008 19:12:25.326651  585014 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:12:25.326701  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:25.344597  585014 system_svc.go:56] duration metric: took 17.941012ms WaitForService to wait for kubelet
	I1008 19:12:25.344621  585014 kubeadm.go:582] duration metric: took 4m49.072648847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:12:25.344638  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:12:25.347385  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:12:25.347404  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:12:25.347425  585014 node_conditions.go:105] duration metric: took 2.783181ms to run NodePressure ...
	I1008 19:12:25.347437  585014 start.go:241] waiting for startup goroutines ...
	I1008 19:12:25.347450  585014 start.go:246] waiting for cluster config update ...
	I1008 19:12:25.347463  585014 start.go:255] writing updated cluster config ...
	I1008 19:12:25.347823  585014 ssh_runner.go:195] Run: rm -f paused
	I1008 19:12:25.395903  585014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:12:25.397911  585014 out.go:177] * Done! kubectl is now configured to use "embed-certs-783146" cluster and "default" namespace by default
	I1008 19:12:25.683645  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:28.182995  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:30.183567  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:32.682881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.013046  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.145916528s)
	I1008 19:12:37.013156  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:37.028010  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:37.037493  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:37.046435  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:37.046455  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:37.046495  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:12:37.055422  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:37.055482  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:37.064538  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:12:37.072968  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:37.073021  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:37.081754  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.090143  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:37.090179  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.098726  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:12:37.107261  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:37.107308  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:37.115975  585096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:37.163570  585096 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:12:37.163642  585096 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:37.272891  585096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:37.273025  585096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:37.273151  585096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:12:37.284204  585096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:37.286084  585096 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:37.286175  585096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:37.286263  585096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:37.286385  585096 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:37.286443  585096 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:37.286545  585096 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:37.286638  585096 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:37.286729  585096 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:37.286812  585096 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:37.286912  585096 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:37.287010  585096 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:37.287082  585096 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:37.287172  585096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:37.602946  585096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:37.727897  585096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:12:37.932126  585096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:37.989742  585096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:38.036655  585096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:38.037085  585096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:38.040618  585096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:35.182881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.683718  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:38.042238  585096 out.go:235]   - Booting up control plane ...
	I1008 19:12:38.042374  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:38.042568  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:38.043504  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:38.065666  585096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:38.071727  585096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:38.071814  585096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:38.210382  585096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:12:38.210516  585096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:12:39.213697  585096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003319891s
	I1008 19:12:39.213803  585096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:12:43.717718  585096 kubeadm.go:310] [api-check] The API server is healthy after 4.502167036s
	I1008 19:12:43.728628  585096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:12:43.744283  585096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:12:43.775369  585096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:12:43.775621  585096 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-142496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:12:43.788583  585096 kubeadm.go:310] [bootstrap-token] Using token: srsq4v.7le212xun40ljc7w
	I1008 19:12:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:42.183680  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:44.185065  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:43.789834  585096 out.go:235]   - Configuring RBAC rules ...
	I1008 19:12:43.789945  585096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:12:43.796091  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:12:43.807906  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:12:43.811025  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:12:43.814445  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:12:43.817615  585096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:12:44.122839  585096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:12:44.567387  585096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:12:45.122714  585096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:12:45.123480  585096 kubeadm.go:310] 
	I1008 19:12:45.123590  585096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:12:45.123617  585096 kubeadm.go:310] 
	I1008 19:12:45.123740  585096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:12:45.123749  585096 kubeadm.go:310] 
	I1008 19:12:45.123789  585096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:12:45.123870  585096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:12:45.123958  585096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:12:45.123984  585096 kubeadm.go:310] 
	I1008 19:12:45.124064  585096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:12:45.124080  585096 kubeadm.go:310] 
	I1008 19:12:45.124152  585096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:12:45.124162  585096 kubeadm.go:310] 
	I1008 19:12:45.124248  585096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:12:45.124366  585096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:12:45.124456  585096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:12:45.124469  585096 kubeadm.go:310] 
	I1008 19:12:45.124579  585096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:12:45.124682  585096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:12:45.124692  585096 kubeadm.go:310] 
	I1008 19:12:45.124804  585096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.124926  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:12:45.124953  585096 kubeadm.go:310] 	--control-plane 
	I1008 19:12:45.124958  585096 kubeadm.go:310] 
	I1008 19:12:45.125086  585096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:12:45.125093  585096 kubeadm.go:310] 
	I1008 19:12:45.125182  585096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.125321  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:12:45.126852  585096 kubeadm.go:310] W1008 19:12:37.105673    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127231  585096 kubeadm.go:310] W1008 19:12:37.106373    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127380  585096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:12:45.127429  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:12:45.127452  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:12:45.129742  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:12:45.130870  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:12:45.143909  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:12:45.170901  585096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:12:45.170965  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:45.170972  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-142496 minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=default-k8s-diff-port-142496 minikube.k8s.io/primary=true
	I1008 19:12:45.198031  585096 ops.go:34] apiserver oom_adj: -16
	I1008 19:12:45.385789  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.684251  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:49.183225  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:45.886434  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.386165  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.886920  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.386786  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.885835  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.386706  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.885981  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.386856  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.471554  585096 kubeadm.go:1113] duration metric: took 4.300656747s to wait for elevateKubeSystemPrivileges
	I1008 19:12:49.471596  585096 kubeadm.go:394] duration metric: took 5m2.486064826s to StartCluster
	I1008 19:12:49.471627  585096 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.471736  585096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:12:49.473381  585096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.473676  585096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:12:49.473768  585096 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:12:49.473874  585096 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473897  585096 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142496"
	I1008 19:12:49.473899  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:12:49.473904  585096 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473923  585096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142496"
	W1008 19:12:49.473907  585096 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:12:49.473939  585096 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473955  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.473967  585096 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.473981  585096 addons.go:243] addon metrics-server should already be in state true
	I1008 19:12:49.474022  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.474283  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474313  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474338  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474366  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474373  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474405  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.475217  585096 out.go:177] * Verifying Kubernetes components...
	I1008 19:12:49.476402  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:12:49.490880  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1008 19:12:49.491405  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.492070  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.492093  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.492454  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.492990  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.493040  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.493623  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 19:12:49.493646  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1008 19:12:49.494011  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494067  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494548  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494565  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494763  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494790  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494930  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495102  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.495276  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495871  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.495908  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.498744  585096 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.498764  585096 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:12:49.498787  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.499142  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.499173  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.514047  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1008 19:12:49.514527  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.515028  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.515046  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.515493  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.515662  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.516519  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1008 19:12:49.517015  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.517643  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.517661  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.517706  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.517757  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I1008 19:12:49.518133  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.518458  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.518617  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.518643  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.518681  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.519107  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.519527  585096 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:12:49.519808  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.519923  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.520415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.520624  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:12:49.520644  585096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:12:49.520669  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.522226  585096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:12:49.523372  585096 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.523396  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:12:49.523415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.523947  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524437  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.524464  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.524830  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.525042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.525198  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.527349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.527693  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527842  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.528009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.528186  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.528325  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.536509  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I1008 19:12:49.536879  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.537341  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.537359  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.537606  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.537897  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.539570  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.539810  585096 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.539831  585096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:12:49.539848  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.542955  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.543522  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543543  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.543726  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.543888  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.544023  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.721845  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:12:49.741622  585096 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.763968  585096 node_ready.go:49] node "default-k8s-diff-port-142496" has status "Ready":"True"
	I1008 19:12:49.764005  585096 node_ready.go:38] duration metric: took 22.348135ms for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.764019  585096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:49.793150  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:49.867565  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.904041  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.912694  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:12:49.912723  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:12:49.962053  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:12:49.962082  585096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:12:50.004678  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.004709  585096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:12:50.068528  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.394807  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394824  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394836  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.394841  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395140  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395161  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395172  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395181  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395181  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395195  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395201  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.395205  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395262  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395425  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395439  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395616  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395668  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395643  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416509  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.416532  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.416815  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416865  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.416880  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634404  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634428  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634722  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.634744  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634752  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634761  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634769  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635036  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635066  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.635079  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.635100  585096 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-142496"
	I1008 19:12:50.636555  585096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:12:51.683959  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.182376  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:50.637816  585096 addons.go:510] duration metric: took 1.164063633s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:12:51.799881  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.299619  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:56.183179  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683102  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683159  584371 pod_ready.go:82] duration metric: took 4m0.006623922s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:58.683173  584371 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:12:58.683184  584371 pod_ready.go:39] duration metric: took 4m4.541923995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:58.683207  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:12:58.683245  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:58.683296  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:58.729385  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:58.729407  584371 cri.go:89] found id: ""
	I1008 19:12:58.729417  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:12:58.729472  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.734291  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:58.734382  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:58.772015  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:12:58.772050  584371 cri.go:89] found id: ""
	I1008 19:12:58.772062  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:12:58.772123  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.776231  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:58.776300  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:58.812962  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:58.812982  584371 cri.go:89] found id: ""
	I1008 19:12:58.812991  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:12:58.813046  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.816951  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:58.817002  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:58.852918  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:58.852939  584371 cri.go:89] found id: ""
	I1008 19:12:58.852946  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:12:58.852992  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.857184  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:58.857245  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:58.895233  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:12:58.895254  584371 cri.go:89] found id: ""
	I1008 19:12:58.895264  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:12:58.895317  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.899301  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:58.899354  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:58.933918  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:58.933946  584371 cri.go:89] found id: ""
	I1008 19:12:58.933956  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:12:58.934003  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.938274  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:58.938361  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:58.980067  584371 cri.go:89] found id: ""
	I1008 19:12:58.980094  584371 logs.go:282] 0 containers: []
	W1008 19:12:58.980104  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:58.980113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:58.980174  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:59.013783  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:12:59.013812  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.013817  584371 cri.go:89] found id: ""
	I1008 19:12:59.013827  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:12:59.013886  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.018420  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.024462  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:12:59.024486  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.062654  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:12:59.062688  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:59.110932  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:59.110966  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:59.248699  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:12:59.248734  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:59.294439  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:12:59.294473  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:59.331208  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:12:59.331241  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:59.374242  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:12:59.374283  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:56.799487  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.800290  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:59.800320  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.800349  585096 pod_ready.go:82] duration metric: took 10.007162242s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.800361  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804590  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.804609  585096 pod_ready.go:82] duration metric: took 4.240474ms for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804620  585096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808737  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.808754  585096 pod_ready.go:82] duration metric: took 4.127686ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808762  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813126  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.813146  585096 pod_ready.go:82] duration metric: took 4.37796ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813154  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817020  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.817039  585096 pod_ready.go:82] duration metric: took 3.878053ms for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817048  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197958  585096 pod_ready.go:93] pod "kube-proxy-wd5kv" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.197983  585096 pod_ready.go:82] duration metric: took 380.928087ms for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197992  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597495  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.597521  585096 pod_ready.go:82] duration metric: took 399.522182ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597529  585096 pod_ready.go:39] duration metric: took 10.833495765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:13:00.597545  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:13:00.597612  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:00.613266  585096 api_server.go:72] duration metric: took 11.139554705s to wait for apiserver process to appear ...
	I1008 19:13:00.613289  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:00.613308  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:13:00.618420  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:13:00.619376  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:00.619399  585096 api_server.go:131] duration metric: took 6.102941ms to wait for apiserver health ...
	I1008 19:13:00.619407  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:00.800687  585096 system_pods.go:59] 9 kube-system pods found
	I1008 19:13:00.800720  585096 system_pods.go:61] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:00.800729  585096 system_pods.go:61] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:00.800733  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:00.800737  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:00.800740  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:00.800743  585096 system_pods.go:61] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:00.800747  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:00.800752  585096 system_pods.go:61] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:00.800755  585096 system_pods.go:61] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:00.800765  585096 system_pods.go:74] duration metric: took 181.352111ms to wait for pod list to return data ...
	I1008 19:13:00.800773  585096 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:00.997631  585096 default_sa.go:45] found service account: "default"
	I1008 19:13:00.997657  585096 default_sa.go:55] duration metric: took 196.876434ms for default service account to be created ...
	I1008 19:13:00.997667  585096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:01.199366  585096 system_pods.go:86] 9 kube-system pods found
	I1008 19:13:01.199396  585096 system_pods.go:89] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:01.199402  585096 system_pods.go:89] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:01.199406  585096 system_pods.go:89] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:01.199409  585096 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:01.199413  585096 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:01.199416  585096 system_pods.go:89] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:01.199419  585096 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:01.199426  585096 system_pods.go:89] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:01.199430  585096 system_pods.go:89] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:01.199439  585096 system_pods.go:126] duration metric: took 201.766214ms to wait for k8s-apps to be running ...
	I1008 19:13:01.199447  585096 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:01.199492  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:01.214863  585096 system_svc.go:56] duration metric: took 15.401989ms WaitForService to wait for kubelet
	I1008 19:13:01.214895  585096 kubeadm.go:582] duration metric: took 11.741185862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:01.214919  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:01.397506  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:01.397530  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:01.397541  585096 node_conditions.go:105] duration metric: took 182.616774ms to run NodePressure ...
	I1008 19:13:01.397553  585096 start.go:241] waiting for startup goroutines ...
	I1008 19:13:01.397560  585096 start.go:246] waiting for cluster config update ...
	I1008 19:13:01.397570  585096 start.go:255] writing updated cluster config ...
	I1008 19:13:01.397828  585096 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:01.448158  585096 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:01.450201  585096 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142496" cluster and "default" namespace by default
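The readiness sequence logged above for default-k8s-diff-port-142496 (per-pod "Ready" checks, then an apiserver /healthz probe) can be reproduced by hand once the profile is up; this is only a sketch, assuming kubectl still has the context named on the previous line and that the apiserver remains at https://192.168.50.213:8444 as logged:

	# wait for the CoreDNS pods the log polled via pod_ready.go
	kubectl --context default-k8s-diff-port-142496 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	# same health probe as api_server.go:253/279 above; prints the bare "ok" body
	kubectl --context default-k8s-diff-port-142496 get --raw /healthz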
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:59.438777  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:59.438814  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:59.945253  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:59.945302  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:00.016570  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:00.016607  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:00.034150  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:00.034183  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:00.075423  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:00.075456  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:00.111132  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:00.111164  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.646570  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:02.666594  584371 api_server.go:72] duration metric: took 4m13.762192057s to wait for apiserver process to appear ...
	I1008 19:13:02.666620  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:02.666663  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:02.666718  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:02.704214  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:02.704242  584371 cri.go:89] found id: ""
	I1008 19:13:02.704250  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:02.704298  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.708636  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:02.708717  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:02.748418  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:02.748444  584371 cri.go:89] found id: ""
	I1008 19:13:02.748455  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:02.748515  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.753267  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:02.753332  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:02.790534  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:02.790562  584371 cri.go:89] found id: ""
	I1008 19:13:02.790571  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:02.790636  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.794880  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:02.794950  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:02.834754  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:02.834774  584371 cri.go:89] found id: ""
	I1008 19:13:02.834781  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:02.834830  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.839391  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:02.839463  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:02.878344  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:02.878371  584371 cri.go:89] found id: ""
	I1008 19:13:02.878380  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:02.878425  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.882939  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:02.883025  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:02.920081  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:02.920104  584371 cri.go:89] found id: ""
	I1008 19:13:02.920112  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:02.920168  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.924141  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:02.924205  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:02.959700  584371 cri.go:89] found id: ""
	I1008 19:13:02.959730  584371 logs.go:282] 0 containers: []
	W1008 19:13:02.959741  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:02.959750  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:02.959822  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:02.996900  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.996927  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:02.996933  584371 cri.go:89] found id: ""
	I1008 19:13:02.996940  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:02.996989  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.001152  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.005021  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:03.005046  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:03.069775  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:03.069813  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:03.120028  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:03.120060  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:03.155756  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:03.155784  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:03.195587  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:03.195624  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:03.231844  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:03.231875  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:03.271156  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:03.271187  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:03.286994  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:03.287017  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:03.397237  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:03.397269  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:03.442373  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:03.442407  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:03.500191  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:03.500222  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:03.535448  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:03.535490  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:03.966382  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:03.966425  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
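The gathering loop above repeats the same pattern for every component: resolve a container ID with crictl, tail that container's logs, and pull the kubelet and CRI-O unit journals. Done by hand on the node it is roughly the following (a sketch; kube-apiserver is just one example of a component name used above):

	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$ID"
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400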
	I1008 19:13:06.513885  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:13:06.518111  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:13:06.519310  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:06.519331  584371 api_server.go:131] duration metric: took 3.852704338s to wait for apiserver health ...
	I1008 19:13:06.519341  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:06.519370  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:06.519417  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:06.558940  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:06.558965  584371 cri.go:89] found id: ""
	I1008 19:13:06.558979  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:06.559029  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.563471  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:06.563537  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:06.607844  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:06.607873  584371 cri.go:89] found id: ""
	I1008 19:13:06.607883  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:06.607944  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.612399  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:06.612456  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:06.645502  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:06.645521  584371 cri.go:89] found id: ""
	I1008 19:13:06.645528  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:06.645575  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.649442  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:06.649519  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:06.685085  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:06.685114  584371 cri.go:89] found id: ""
	I1008 19:13:06.685126  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:06.685183  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.689859  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:06.689935  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:06.724775  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:06.724803  584371 cri.go:89] found id: ""
	I1008 19:13:06.724814  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:06.724873  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.729489  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:06.729542  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:06.776599  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:06.776626  584371 cri.go:89] found id: ""
	I1008 19:13:06.776636  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:06.776704  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.780790  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:06.780863  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:06.817072  584371 cri.go:89] found id: ""
	I1008 19:13:06.817097  584371 logs.go:282] 0 containers: []
	W1008 19:13:06.817106  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:06.817113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:06.817171  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:06.855429  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:06.855453  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:06.855457  584371 cri.go:89] found id: ""
	I1008 19:13:06.855465  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:06.855520  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.859774  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.863800  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:06.863821  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:06.931413  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:06.931443  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:06.946213  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:06.946236  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:07.070604  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:07.070640  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:07.114749  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:07.114782  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:07.152555  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:07.152584  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:07.192730  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:07.192759  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:07.242001  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:07.242036  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:07.612662  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:07.612714  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:07.656655  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:07.656700  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:07.695462  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:07.695494  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:07.733107  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:07.733143  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:07.779348  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:07.779382  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:10.325584  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:13:10.325616  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.325620  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.325624  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.325628  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.325631  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.325634  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.325639  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.325644  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.325651  584371 system_pods.go:74] duration metric: took 3.806304739s to wait for pod list to return data ...
	I1008 19:13:10.325659  584371 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:10.328062  584371 default_sa.go:45] found service account: "default"
	I1008 19:13:10.328082  584371 default_sa.go:55] duration metric: took 2.41797ms for default service account to be created ...
	I1008 19:13:10.328089  584371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:10.332201  584371 system_pods.go:86] 8 kube-system pods found
	I1008 19:13:10.332224  584371 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.332229  584371 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.332233  584371 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.332237  584371 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.332241  584371 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.332245  584371 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.332250  584371 system_pods.go:89] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.332254  584371 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.332261  584371 system_pods.go:126] duration metric: took 4.167739ms to wait for k8s-apps to be running ...
	I1008 19:13:10.332270  584371 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:10.332313  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:10.350257  584371 system_svc.go:56] duration metric: took 17.979349ms WaitForService to wait for kubelet
	I1008 19:13:10.350288  584371 kubeadm.go:582] duration metric: took 4m21.445892386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:10.350310  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:10.352582  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:10.352598  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:10.352609  584371 node_conditions.go:105] duration metric: took 2.294326ms to run NodePressure ...
	I1008 19:13:10.352620  584371 start.go:241] waiting for startup goroutines ...
	I1008 19:13:10.352626  584371 start.go:246] waiting for cluster config update ...
	I1008 19:13:10.352636  584371 start.go:255] writing updated cluster config ...
	I1008 19:13:10.352882  584371 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:10.401998  584371 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:10.404037  584371 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
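With the stale kubeconfigs removed, minikube retries the same init below. The recovery sequence it has just walked through amounts to the commands shown in the log, roughly (sketch; the --ignore-preflight-errors list is elided here exactly as passed above):

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	sudo rm -f /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=<same list as above>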
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
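The hint block kubeadm prints above is the practical checklist when the kubelet never answers on localhost:10248; on the node, the checks it names are (a sketch, with CONTAINERID a placeholder for an ID taken from the ps output):

	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID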
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 
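	The suggestion above points at the kubelet cgroup driver. A minimal troubleshooting sketch, using only the commands already suggested in this output, assuming shell access to the affected node (the profile name no-preload-966632 is inferred from the node name in the CRI-O log below and may differ in other runs):
	
	  # On the node: check whether the kubelet is running and why it may have exited
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet | tail -n 100
	
	  # List any control-plane containers the runtime managed to start
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	  # If a cgroup-driver mismatch is confirmed, retry the start with the hinted flag
	  minikube start -p no-preload-966632 --extra-config=kubelet.cgroup-driver=systemd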
	
	
	==> CRI-O <==
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.372177025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415332372154126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5757adcc-f948-4f17-92d1-93dede2dad00 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.372743204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1278716b-c73f-4487-a593-59f2ee18b888 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.372796602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1278716b-c73f-4487-a593-59f2ee18b888 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.373034091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1278716b-c73f-4487-a593-59f2ee18b888 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.414890666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b129f67c-b493-4c03-95f0-51cd433d2835 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.414980365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b129f67c-b493-4c03-95f0-51cd433d2835 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.416267547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9ce5914-4590-44c2-b132-594dea497617 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.416586254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415332416566148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9ce5914-4590-44c2-b132-594dea497617 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.417280345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad85d444-534c-4da7-813b-b7f3ce596bf8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.417332346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad85d444-534c-4da7-813b-b7f3ce596bf8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.417530052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad85d444-534c-4da7-813b-b7f3ce596bf8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.456957107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=021c7fd4-b1e1-4f63-b050-abf3a2b94ba5 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.457028699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=021c7fd4-b1e1-4f63-b050-abf3a2b94ba5 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.458042412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44fc77c0-8cc9-4c98-a085-e735d30f9f0b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.458359467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415332458336388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44fc77c0-8cc9-4c98-a085-e735d30f9f0b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.458959735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82939bd0-7cbd-4c4e-b5e4-31e96ae489e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.459010987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82939bd0-7cbd-4c4e-b5e4-31e96ae489e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.459234921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82939bd0-7cbd-4c4e-b5e4-31e96ae489e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.491145764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8cf6f41-a300-4e92-9df3-6eeb9ec3f11e name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.491254334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8cf6f41-a300-4e92-9df3-6eeb9ec3f11e name=/runtime.v1.RuntimeService/Version
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.492334158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d5cda33-759c-48ca-8aae-2ca606fb4129 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.492735568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415332492713050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d5cda33-759c-48ca-8aae-2ca606fb4129 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.493447920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ac824da-8e23-4cf9-b9ab-ea323be5ca1f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.493495915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ac824da-8e23-4cf9-b9ab-ea323be5ca1f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:22:12 no-preload-966632 crio[709]: time="2024-10-08 19:22:12.493711411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ac824da-8e23-4cf9-b9ab-ea323be5ca1f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f17c106378228       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   812d98aede592       storage-provisioner
	0b49d58582dbc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e45a8291cc38a       busybox
	09475152f3f1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   62108e9cc22e8       coredns-7c65d6cfc9-r8qft
	f1591b11958e9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   33ad6c744ea88       kube-proxy-qpnvm
	035c2e708170e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   812d98aede592       storage-provisioner
	c8765b4e849e7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   6ba661acb123f       etcd-no-preload-966632
	51e1de45365e8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   56baf3c2256dd       kube-scheduler-no-preload-966632
	d97350daf0186       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   b3b9e56a2c0f1       kube-controller-manager-no-preload-966632
	ebd3d4cf59214       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   119d64c7893bb       kube-apiserver-no-preload-966632
	
	
	==> coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50679 - 29674 "HINFO IN 8047378031698476006.6929136164188044077. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.413715271s
	
	
	==> describe nodes <==
	Name:               no-preload-966632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-966632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=no-preload-966632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T18_59_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:59:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-966632
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 19:22:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 19:19:26 +0000   Tue, 08 Oct 2024 18:59:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 19:19:26 +0000   Tue, 08 Oct 2024 18:59:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 19:19:26 +0000   Tue, 08 Oct 2024 18:59:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 19:19:26 +0000   Tue, 08 Oct 2024 19:08:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.141
	  Hostname:    no-preload-966632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c76d7b5eb04f4388b86b4ad08c01e70a
	  System UUID:                c76d7b5e-b04f-4388-b86b-4ad08c01e70a
	  Boot ID:                    d5cdc3b8-6cce-4afd-835b-744b3f08d692
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-r8qft                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-966632                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-966632             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-966632    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-qpnvm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-966632             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-rlt25              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-966632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-966632 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-966632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-966632 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-966632 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-966632 event: Registered Node no-preload-966632 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-966632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-966632 event: Registered Node no-preload-966632 in Controller
	
	
	==> dmesg <==
	[Oct 8 19:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062470] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043210] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.193173] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.466095] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606649] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.562209] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.055041] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069134] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.166523] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.142607] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.250072] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[ +15.370221] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.058162] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.733968] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +4.950518] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.545853] systemd-fstab-generator[1989]: Ignoring "noauto" option for root device
	[  +0.540103] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.287449] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] <==
	{"level":"info","ts":"2024-10-08T19:08:40.918420Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98daa217e16821c9","local-member-id":"41850776257dba86","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:08:40.916730Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"41850776257dba86","initial-advertise-peer-urls":["https://192.168.61.141:2380"],"listen-peer-urls":["https://192.168.61.141:2380"],"advertise-client-urls":["https://192.168.61.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-08T19:08:40.916755Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-08T19:08:40.916939Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.141:2380"}
	{"level":"info","ts":"2024-10-08T19:08:40.919914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:08:40.920140Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.141:2380"}
	{"level":"info","ts":"2024-10-08T19:08:42.470697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-08T19:08:42.470809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-08T19:08:42.470982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 received MsgPreVoteResp from 41850776257dba86 at term 2"}
	{"level":"info","ts":"2024-10-08T19:08:42.471025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 became candidate at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.471050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 received MsgVoteResp from 41850776257dba86 at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.471077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 became leader at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.471120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 41850776257dba86 elected leader 41850776257dba86 at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.482087Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T19:08:42.482177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T19:08:42.482219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T19:08:42.481920Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"41850776257dba86","local-member-attributes":"{Name:no-preload-966632 ClientURLs:[https://192.168.61.141:2379]}","request-path":"/0/members/41850776257dba86/attributes","cluster-id":"98daa217e16821c9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T19:08:42.482986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T19:08:42.483460Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:08:42.483941Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:08:42.484767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.141:2379"}
	{"level":"info","ts":"2024-10-08T19:08:42.485132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T19:18:42.516237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":866}
	{"level":"info","ts":"2024-10-08T19:18:42.525755Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":866,"took":"9.17137ms","hash":2850192731,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2617344,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-08T19:18:42.525823Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2850192731,"revision":866,"compact-revision":-1}
	
	
	==> kernel <==
	 19:22:12 up 14 min,  0 users,  load average: 0.05, 0.16, 0.16
	Linux no-preload-966632 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] <==
	W1008 19:18:44.797355       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:18:44.797454       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:18:44.798479       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:18:44.798556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:19:44.799568       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:19:44.799644       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1008 19:19:44.799800       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:19:44.800005       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1008 19:19:44.800792       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:19:44.801917       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:21:44.801736       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:21:44.802131       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1008 19:21:44.802193       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:21:44.802237       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1008 19:21:44.803901       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:21:44.803965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] <==
	E1008 19:16:49.340823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:16:49.960311       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:17:19.346924       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:17:19.967635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:17:49.353920       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:17:49.975828       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:18:19.360720       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:18:19.983237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:18:49.368650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:18:49.991555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:19:19.375229       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:19:20.002096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:19:26.743799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-966632"
	I1008 19:19:36.727175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="221.224µs"
	E1008 19:19:49.381407       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:19:50.009413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:19:50.729998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="200.962µs"
	E1008 19:20:19.387974       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:20:20.016613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:20:49.394956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:20:50.024641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:21:19.403745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:21:20.031644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:21:49.409809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:21:50.038710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 19:08:45.253655       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 19:08:45.263023       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.141"]
	E1008 19:08:45.263089       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 19:08:45.296803       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 19:08:45.296903       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 19:08:45.296922       1 server_linux.go:169] "Using iptables Proxier"
	I1008 19:08:45.299297       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 19:08:45.299509       1 server.go:483] "Version info" version="v1.31.1"
	I1008 19:08:45.299539       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:08:45.301341       1 config.go:199] "Starting service config controller"
	I1008 19:08:45.301374       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 19:08:45.301404       1 config.go:105] "Starting endpoint slice config controller"
	I1008 19:08:45.301424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 19:08:45.303578       1 config.go:328] "Starting node config controller"
	I1008 19:08:45.303686       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 19:08:45.401542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 19:08:45.401653       1 shared_informer.go:320] Caches are synced for service config
	I1008 19:08:45.403819       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] <==
	I1008 19:08:41.711974       1 serving.go:386] Generated self-signed cert in-memory
	W1008 19:08:43.753949       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 19:08:43.754145       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 19:08:43.754232       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 19:08:43.754258       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 19:08:43.838289       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1008 19:08:43.838389       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:08:43.847549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 19:08:43.848079       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 19:08:43.849921       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 19:08:43.849980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 19:08:43.951049       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 19:21:04 no-preload-966632 kubelet[1365]: E1008 19:21:04.708922    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:21:09 no-preload-966632 kubelet[1365]: E1008 19:21:09.909053    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415269908379502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:09 no-preload-966632 kubelet[1365]: E1008 19:21:09.909444    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415269908379502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:18 no-preload-966632 kubelet[1365]: E1008 19:21:18.709510    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:21:19 no-preload-966632 kubelet[1365]: E1008 19:21:19.911315    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415279911084613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:19 no-preload-966632 kubelet[1365]: E1008 19:21:19.911355    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415279911084613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:29 no-preload-966632 kubelet[1365]: E1008 19:21:29.912618    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415289912229437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:29 no-preload-966632 kubelet[1365]: E1008 19:21:29.912954    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415289912229437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:32 no-preload-966632 kubelet[1365]: E1008 19:21:32.708559    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]: E1008 19:21:39.722592    1365 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]: E1008 19:21:39.914876    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415299914481804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:39 no-preload-966632 kubelet[1365]: E1008 19:21:39.914947    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415299914481804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:46 no-preload-966632 kubelet[1365]: E1008 19:21:46.708810    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:21:49 no-preload-966632 kubelet[1365]: E1008 19:21:49.917258    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415309916306129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:49 no-preload-966632 kubelet[1365]: E1008 19:21:49.917302    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415309916306129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:57 no-preload-966632 kubelet[1365]: E1008 19:21:57.709061    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:21:59 no-preload-966632 kubelet[1365]: E1008 19:21:59.918180    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415319917956605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:21:59 no-preload-966632 kubelet[1365]: E1008 19:21:59.918218    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415319917956605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:22:09 no-preload-966632 kubelet[1365]: E1008 19:22:09.919547    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415329919234465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:22:09 no-preload-966632 kubelet[1365]: E1008 19:22:09.919595    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415329919234465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:22:11 no-preload-966632 kubelet[1365]: E1008 19:22:11.713376    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	
	
	==> storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] <==
	I1008 19:08:45.149384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 19:09:15.152591       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] <==
	I1008 19:09:16.015381       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 19:09:16.025202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 19:09:16.025316       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 19:09:33.428710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 19:09:33.429221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-966632_da8eff02-06a8-4eff-bbc9-8851223a9e34!
	I1008 19:09:33.429305       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5bd04e67-ee9b-4a46-933b-412c58b00453", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-966632_da8eff02-06a8-4eff-bbc9-8851223a9e34 became leader
	I1008 19:09:33.530199       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-966632_da8eff02-06a8-4eff-bbc9-8851223a9e34!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966632 -n no-preload-966632
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-966632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rlt25
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-966632 describe pod metrics-server-6867b74b74-rlt25
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-966632 describe pod metrics-server-6867b74b74-rlt25: exit status 1 (68.893644ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rlt25" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-966632 describe pod metrics-server-6867b74b74-rlt25: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.14s)
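Note (editorial, not part of the harness output): the post-mortem above first lists non-running pods by name, then describes each captured name; if the pod is deleted or replaced between those two calls, the describe returns NotFound, which appears to be what happened with metrics-server-6867b74b74-rlt25 here. A minimal sketch for manual reproduction, assuming a POSIX shell, a working kubeconfig, and the same context name shown in the log, that re-lists and describes non-running pods in a single pass so the describe always uses a fresh pod name:

	# list every pod not in phase Running, then describe each one immediately (hypothetical helper, not from helpers_test.go)
	kubectl --context no-preload-966632 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read -r ns name; do
	      kubectl --context no-preload-966632 -n "$ns" describe pod "$name"
	    done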

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
[previous warning repeated 16 more times while 192.168.39.90:8443 kept refusing connections]
E1008 19:16:38.895563  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
[previous warning repeated 135 more times while 192.168.39.90:8443 kept refusing connections]
E1008 19:18:54.839087  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
[previous warning repeated 28 more times while 192.168.39.90:8443 kept refusing connections]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E1008 19:20:51.764886  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E1008 19:21:38.895902  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
E1008 19:24:41.967298  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
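The connection-refused warnings above all point at the same apiserver endpoint, 192.168.39.90:8443. As an illustrative aside (these probes are not part of the test run), that endpoint could be checked by hand with standard tools; the address and port are taken verbatim from the warnings:

    # plain TCP reachability check against the apiserver port
    nc -vz 192.168.39.90 8443
    # or hit the apiserver's version endpoint, skipping certificate verification
    curl -k https://192.168.39.90:8443/version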
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (243.643225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-256554" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
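For reference, the condition the test polls for can be reproduced by hand with the same namespace and label selector once the apiserver is reachable again; the kubectl context name below is an assumption, based on minikube registering contexts under the profile name:

    kubectl --context old-k8s-version-256554 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide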
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (223.096878ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
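The two status probes above read the apiserver and host fields in separate calls. Since minikube's --format flag accepts a Go template over the status struct, both fields could be read in a single invocation (a sketch using the same field names as the probes above); given the outputs captured here it would be expected to print Running/Stopped:

    out/minikube-linux-amd64 status -p old-k8s-version-256554 --format='{{.Host}}/{{.APIServer}}'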
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-256554 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-256554 logs -n 25: (1.521445696s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-038693 sudo                            | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                 | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:04:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:20.270613  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:23.342576  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:04:29.422638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:32.494606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:38.574600  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:41.646592  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:47.726606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:50.798595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:56.878669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:59.950607  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:06.030583  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:09.102584  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:15.182571  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:18.254590  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:24.334638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:27.406606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:33.486619  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:36.558552  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:42.638565  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:45.710610  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:51.790561  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:54.862591  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:00.942606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:04.014669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:10.094618  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:13.166598  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:19.246573  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:22.318595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:28.398732  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:31.470685  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:37.550574  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:40.622614  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:46.702620  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:49.774581  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:55.854627  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:58.926568  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:07:01.929445  585014 start.go:364] duration metric: took 3m15.782086174s to acquireMachinesLock for "embed-certs-783146"
	I1008 19:07:01.929517  585014 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:01.929523  585014 fix.go:54] fixHost starting: 
	I1008 19:07:01.929889  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:01.929945  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:01.945409  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 19:07:01.945858  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:01.946357  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:01.946387  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:01.946744  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:01.946895  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:01.947028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:01.948399  585014 fix.go:112] recreateIfNeeded on embed-certs-783146: state=Stopped err=<nil>
	I1008 19:07:01.948419  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	W1008 19:07:01.948545  585014 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:01.954020  585014 out.go:177] * Restarting existing kvm2 VM for "embed-certs-783146" ...
	I1008 19:07:01.926825  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:01.926871  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927219  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:07:01.927270  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927475  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:07:01.929278  584371 machine.go:96] duration metric: took 4m37.425232924s to provisionDockerMachine
	I1008 19:07:01.929341  584371 fix.go:56] duration metric: took 4m37.445578307s for fixHost
	I1008 19:07:01.929349  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 4m37.445609603s
	W1008 19:07:01.929369  584371 start.go:714] error starting host: provision: host is not running
	W1008 19:07:01.929510  584371 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1008 19:07:01.929524  584371 start.go:729] Will try again in 5 seconds ...
	I1008 19:07:01.955309  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Start
	I1008 19:07:01.955452  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 19:07:01.956122  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 19:07:01.956432  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 19:07:01.956743  585014 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 19:07:01.957427  585014 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 19:07:03.159229  585014 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 19:07:03.160116  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.160503  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.160565  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.160497  585935 retry.go:31] will retry after 282.873854ms: waiting for machine to come up
	I1008 19:07:03.445297  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.445810  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.445838  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.445740  585935 retry.go:31] will retry after 344.936527ms: waiting for machine to come up
	I1008 19:07:03.792413  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.792802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.792837  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.792741  585935 retry.go:31] will retry after 414.968289ms: waiting for machine to come up
	I1008 19:07:04.209200  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.209532  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.209555  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.209502  585935 retry.go:31] will retry after 403.180416ms: waiting for machine to come up
	I1008 19:07:04.614156  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.614679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.614713  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.614636  585935 retry.go:31] will retry after 631.841511ms: waiting for machine to come up
	I1008 19:07:05.248574  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.248983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.249015  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.248917  585935 retry.go:31] will retry after 639.776909ms: waiting for machine to come up
	I1008 19:07:05.890868  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.891332  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.891406  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.891329  585935 retry.go:31] will retry after 764.489176ms: waiting for machine to come up
	I1008 19:07:06.931497  584371 start.go:360] acquireMachinesLock for no-preload-966632: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:06.657130  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:06.657520  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:06.657550  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:06.657462  585935 retry.go:31] will retry after 1.348973281s: waiting for machine to come up
	I1008 19:07:08.008293  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:08.008779  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:08.008805  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:08.008740  585935 retry.go:31] will retry after 1.146283289s: waiting for machine to come up
	I1008 19:07:09.157106  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:09.157517  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:09.157546  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:09.157493  585935 retry.go:31] will retry after 1.510430686s: waiting for machine to come up
	I1008 19:07:10.669393  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:10.669802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:10.669831  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:10.669749  585935 retry.go:31] will retry after 2.380864418s: waiting for machine to come up
	I1008 19:07:13.053078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:13.053487  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:13.053512  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:13.053427  585935 retry.go:31] will retry after 2.553865951s: waiting for machine to come up
	I1008 19:07:15.610098  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:15.610501  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:15.610535  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:15.610428  585935 retry.go:31] will retry after 4.018444789s: waiting for machine to come up
	I1008 19:07:20.967039  585096 start.go:364] duration metric: took 3m30.476693248s to acquireMachinesLock for "default-k8s-diff-port-142496"
	I1008 19:07:20.967105  585096 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:20.967115  585096 fix.go:54] fixHost starting: 
	I1008 19:07:20.967619  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:20.967675  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:20.984936  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1008 19:07:20.985358  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:20.985869  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:07:20.985896  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:20.986199  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:20.986380  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:20.986520  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:07:20.987828  585096 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142496: state=Stopped err=<nil>
	I1008 19:07:20.987867  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	W1008 19:07:20.988020  585096 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:20.990029  585096 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142496" ...
	I1008 19:07:19.632076  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632468  585014 main.go:141] libmachine: (embed-certs-783146) Found IP for machine: 192.168.72.183
	I1008 19:07:19.632504  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has current primary IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632511  585014 main.go:141] libmachine: (embed-certs-783146) Reserving static IP address...
	I1008 19:07:19.632968  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.633020  585014 main.go:141] libmachine: (embed-certs-783146) DBG | skip adding static IP to network mk-embed-certs-783146 - found existing host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"}
	I1008 19:07:19.633041  585014 main.go:141] libmachine: (embed-certs-783146) Reserved static IP address: 192.168.72.183
	I1008 19:07:19.633062  585014 main.go:141] libmachine: (embed-certs-783146) Waiting for SSH to be available...
	I1008 19:07:19.633073  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Getting to WaitForSSH function...
	I1008 19:07:19.634939  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635221  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.635249  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635415  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH client type: external
	I1008 19:07:19.635453  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa (-rw-------)
	I1008 19:07:19.635496  585014 main.go:141] libmachine: (embed-certs-783146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:19.635509  585014 main.go:141] libmachine: (embed-certs-783146) DBG | About to run SSH command:
	I1008 19:07:19.635522  585014 main.go:141] libmachine: (embed-certs-783146) DBG | exit 0
	I1008 19:07:19.758276  585014 main.go:141] libmachine: (embed-certs-783146) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:19.758658  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 19:07:19.759310  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:19.761990  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.762456  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762803  585014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:07:19.763012  585014 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:19.763034  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:19.763271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.765523  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765829  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.765858  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765988  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.766159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766289  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766433  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.766589  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.766877  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.766891  585014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:19.866272  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:19.866297  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866563  585014 buildroot.go:166] provisioning hostname "embed-certs-783146"
	I1008 19:07:19.866585  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866799  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.869295  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869648  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.869679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869836  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.870017  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870153  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870293  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.870444  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.870621  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.870636  585014 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-783146 && echo "embed-certs-783146" | sudo tee /etc/hostname
	I1008 19:07:19.983892  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-783146
	
	I1008 19:07:19.983925  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.986430  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986776  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.986806  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986922  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.987104  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.987588  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.987746  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.987762  585014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-783146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-783146/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-783146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:20.095178  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:20.095212  585014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:20.095264  585014 buildroot.go:174] setting up certificates
	I1008 19:07:20.095276  585014 provision.go:84] configureAuth start
	I1008 19:07:20.095288  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:20.095578  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.098000  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098431  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.098459  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098591  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.100935  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101241  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.101271  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101393  585014 provision.go:143] copyHostCerts
	I1008 19:07:20.101452  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:20.101463  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:20.101544  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:20.101807  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:20.101824  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:20.101873  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:20.102015  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:20.102029  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:20.102075  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:20.102152  585014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-783146 san=[127.0.0.1 192.168.72.183 embed-certs-783146 localhost minikube]
	I1008 19:07:20.378020  585014 provision.go:177] copyRemoteCerts
	I1008 19:07:20.378093  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:20.378133  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.380678  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381017  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.381050  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381175  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.381386  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.381579  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.381717  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.464627  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:20.487853  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:07:20.510174  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:07:20.532381  585014 provision.go:87] duration metric: took 437.094502ms to configureAuth
	I1008 19:07:20.532405  585014 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:20.532571  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:20.532669  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.535064  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.535382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.535753  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.535920  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.536039  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.536193  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.536406  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.536429  585014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:20.745937  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:20.745967  585014 machine.go:96] duration metric: took 982.940955ms to provisionDockerMachine
	I1008 19:07:20.745980  585014 start.go:293] postStartSetup for "embed-certs-783146" (driver="kvm2")
	I1008 19:07:20.745994  585014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:20.746012  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.746380  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:20.746417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.749056  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749395  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.749425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749566  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.749738  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.749852  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.749943  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.828580  585014 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:20.832894  585014 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:20.832923  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:20.832994  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:20.833069  585014 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:20.833162  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:20.842230  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:20.864957  585014 start.go:296] duration metric: took 118.964089ms for postStartSetup
	I1008 19:07:20.865006  585014 fix.go:56] duration metric: took 18.93548189s for fixHost
	I1008 19:07:20.865029  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.867709  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868089  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.868113  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868223  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.868425  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868583  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868742  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.868926  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.869159  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.869175  585014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:20.966898  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414440.940275348
	
	I1008 19:07:20.966919  585014 fix.go:216] guest clock: 1728414440.940275348
	I1008 19:07:20.966926  585014 fix.go:229] Guest: 2024-10-08 19:07:20.940275348 +0000 UTC Remote: 2024-10-08 19:07:20.865011917 +0000 UTC m=+214.857488447 (delta=75.263431ms)
	I1008 19:07:20.966948  585014 fix.go:200] guest clock delta is within tolerance: 75.263431ms
	I1008 19:07:20.966953  585014 start.go:83] releasing machines lock for "embed-certs-783146", held for 19.037463535s
	I1008 19:07:20.966979  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.967246  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.969983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.970386  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970586  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971061  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971243  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971340  585014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:20.971382  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.971487  585014 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:20.971515  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.974211  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974581  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974632  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.974695  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974872  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.974999  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.975024  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.975028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975184  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975228  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.975374  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975501  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.975559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975709  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:21.072152  585014 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:21.078116  585014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:21.221176  585014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:21.227359  585014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:21.227434  585014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:21.242691  585014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:21.242716  585014 start.go:495] detecting cgroup driver to use...
	I1008 19:07:21.242796  585014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:21.257429  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:21.270208  585014 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:21.270258  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:21.282826  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:21.295827  585014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:21.405804  585014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:21.572147  585014 docker.go:233] disabling docker service ...
	I1008 19:07:21.572231  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:21.586083  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:21.598657  585014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:21.722224  585014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:21.853317  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:21.867234  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:21.884872  585014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:21.884949  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.895154  585014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:21.895223  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.905371  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.915602  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.926026  585014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:21.938089  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.949261  585014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.966211  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.978120  585014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:21.987631  585014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:21.987693  585014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:22.002185  585014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:22.013111  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:22.135933  585014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:22.230256  585014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:22.230342  585014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:22.235005  585014 start.go:563] Will wait 60s for crictl version
	I1008 19:07:22.235076  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:07:22.238991  585014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:22.279302  585014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:22.279391  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.308343  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.337272  585014 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:20.991759  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Start
	I1008 19:07:20.991997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring networks are active...
	I1008 19:07:20.992703  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network default is active
	I1008 19:07:20.993057  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network mk-default-k8s-diff-port-142496 is active
	I1008 19:07:20.993435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Getting domain xml...
	I1008 19:07:20.994209  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Creating domain...
	I1008 19:07:22.240185  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting to get IP...
	I1008 19:07:22.240949  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241417  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241469  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.241382  586083 retry.go:31] will retry after 234.248435ms: waiting for machine to come up
	I1008 19:07:22.476800  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477343  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477375  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.477275  586083 retry.go:31] will retry after 323.851452ms: waiting for machine to come up
	I1008 19:07:22.802997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803574  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803610  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.803516  586083 retry.go:31] will retry after 445.299956ms: waiting for machine to come up
	I1008 19:07:23.250211  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250686  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250715  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.250651  586083 retry.go:31] will retry after 574.786836ms: waiting for machine to come up
	I1008 19:07:23.827535  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828010  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.827959  586083 retry.go:31] will retry after 563.165045ms: waiting for machine to come up
	I1008 19:07:24.393150  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393741  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393792  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.393717  586083 retry.go:31] will retry after 576.443855ms: waiting for machine to come up
	I1008 19:07:24.971698  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972132  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972161  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.972090  586083 retry.go:31] will retry after 999.17904ms: waiting for machine to come up
	I1008 19:07:22.338812  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:22.341998  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:22.342417  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342680  585014 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:22.346863  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:22.359456  585014 kubeadm.go:883] updating cluster {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:22.359630  585014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:22.359692  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:22.394832  585014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:22.394893  585014 ssh_runner.go:195] Run: which lz4
	I1008 19:07:22.398935  585014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:22.403100  585014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:22.403127  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:23.771685  585014 crio.go:462] duration metric: took 1.372780034s to copy over tarball
	I1008 19:07:23.771769  585014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:25.816508  585014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044704362s)
	I1008 19:07:25.816547  585014 crio.go:469] duration metric: took 2.04482777s to extract the tarball
	I1008 19:07:25.816557  585014 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:25.852980  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:25.893366  585014 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:25.893391  585014 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:25.893399  585014 kubeadm.go:934] updating node { 192.168.72.183 8443 v1.31.1 crio true true} ...
	I1008 19:07:25.893517  585014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-783146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:25.893579  585014 ssh_runner.go:195] Run: crio config
	I1008 19:07:25.934828  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:25.934850  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:25.934874  585014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:25.934906  585014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.183 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-783146 NodeName:embed-certs-783146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:25.935039  585014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-783146"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:25.935106  585014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:25.944851  585014 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:25.944919  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:25.954022  585014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1008 19:07:25.979675  585014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:26.001147  585014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1008 19:07:26.017613  585014 ssh_runner.go:195] Run: grep 192.168.72.183	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:26.021401  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:26.033347  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:25.972405  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972868  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972891  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:25.972831  586083 retry.go:31] will retry after 1.186801161s: waiting for machine to come up
	I1008 19:07:27.161319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161877  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161900  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:27.161823  586083 retry.go:31] will retry after 1.448383195s: waiting for machine to come up
	I1008 19:07:28.611319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611697  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:28.611613  586083 retry.go:31] will retry after 1.738948191s: waiting for machine to come up
	I1008 19:07:30.352081  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352582  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352617  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:30.352530  586083 retry.go:31] will retry after 2.624799898s: waiting for machine to come up
	I1008 19:07:26.138298  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:26.154419  585014 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146 for IP: 192.168.72.183
	I1008 19:07:26.154447  585014 certs.go:194] generating shared ca certs ...
	I1008 19:07:26.154470  585014 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:26.154651  585014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:26.154714  585014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:26.154729  585014 certs.go:256] generating profile certs ...
	I1008 19:07:26.154860  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/client.key
	I1008 19:07:26.154948  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key.b07aac04
	I1008 19:07:26.155003  585014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key
	I1008 19:07:26.155159  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:26.155202  585014 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:26.155212  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:26.155232  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:26.155256  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:26.155280  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:26.155319  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:26.156076  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:26.187225  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:26.235804  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:26.268034  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:26.292729  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 19:07:26.320118  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:26.351058  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:26.374004  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:07:26.396526  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:26.419067  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:26.441449  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:26.463768  585014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:26.479471  585014 ssh_runner.go:195] Run: openssl version
	I1008 19:07:26.484957  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:26.495286  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501166  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501225  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.507154  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:26.517587  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:26.528157  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532896  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532967  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.540724  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:26.554952  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:26.567160  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571304  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571394  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.576974  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:26.587198  585014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:26.591621  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:26.597176  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:26.602766  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:26.608373  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:26.613797  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:26.619310  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
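	The "openssl x509 -noout -checkend 86400" calls above verify that each reused control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. A minimal Go sketch of an equivalent check (this is not minikube's own implementation; the certificate path is one of the files transferred above and error handling is deliberately simplified):

	// certcheck.go - illustrative sketch of an "expires within 24h?" test,
	// mirroring: openssl x509 -noout -in <cert> -checkend 86400
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // path from the log above; adjust as needed
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Fail if the certificate's NotAfter falls within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; regeneration needed")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}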
	I1008 19:07:26.624702  585014 kubeadm.go:392] StartCluster: {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:26.624831  585014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:26.624878  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.666183  585014 cri.go:89] found id: ""
	I1008 19:07:26.666253  585014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:26.676621  585014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:26.676644  585014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:26.676699  585014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:26.686549  585014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:26.687532  585014 kubeconfig.go:125] found "embed-certs-783146" server: "https://192.168.72.183:8443"
	I1008 19:07:26.689545  585014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:26.698758  585014 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.183
	I1008 19:07:26.698790  585014 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:26.698804  585014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:26.698856  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.738148  585014 cri.go:89] found id: ""
	I1008 19:07:26.738209  585014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:26.753980  585014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:26.763186  585014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:26.763208  585014 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:26.763257  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:07:26.771789  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:26.771847  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:26.780812  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:07:26.789329  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:26.789390  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:26.798230  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.806781  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:26.806842  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.815549  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:07:26.823782  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:26.823830  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:26.832698  585014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:26.841687  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:26.945569  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.159232  585014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213619978s)
	I1008 19:07:28.159280  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.372727  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.456082  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.567486  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:28.567627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.067909  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.568466  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.068627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.567821  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.604366  585014 api_server.go:72] duration metric: took 2.036885191s to wait for apiserver process to appear ...
	I1008 19:07:30.604403  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:30.604440  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.461223  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.461270  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.461286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.499425  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.499473  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.604563  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.614594  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:33.614625  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.105286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.111706  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:34.111747  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.605326  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.612912  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:07:34.619204  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:34.619227  585014 api_server.go:131] duration metric: took 4.014816798s to wait for apiserver health ...
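	The healthz polling above shows the usual restart progression: 403 while the apiserver is up but RBAC bootstrap roles are not yet in place (anonymous access is forbidden), 500 while poststarthooks such as rbac/bootstrap-roles and bootstrap-controller are still failing, and finally 200 once all hooks report ok. A minimal Go sketch of that kind of readiness poll, assuming the endpoint from this run and a placeholder deadline (not minikube's actual code, which also pins the cluster CA instead of skipping TLS verification):

	// healthzwait.go - illustrative sketch: poll /healthz until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification only for brevity; a real client would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute) // placeholder deadline
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.183:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for apiserver /healthz")
		os.Exit(1)
	}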
	I1008 19:07:34.619236  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:34.619242  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:34.621043  585014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:32.980593  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981141  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981171  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:32.981076  586083 retry.go:31] will retry after 3.401015855s: waiting for machine to come up
	I1008 19:07:34.622500  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:34.632627  585014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:34.654975  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:34.667824  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:34.667853  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:34.667863  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:34.667874  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:34.667879  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:34.667884  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:34.667890  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:34.667899  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:34.667904  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:34.667910  585014 system_pods.go:74] duration metric: took 12.913884ms to wait for pod list to return data ...
	I1008 19:07:34.667919  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:34.672996  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:34.673018  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:34.673029  585014 node_conditions.go:105] duration metric: took 5.105827ms to run NodePressure ...
	I1008 19:07:34.673045  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:34.992309  585014 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996835  585014 kubeadm.go:739] kubelet initialised
	I1008 19:07:34.996861  585014 kubeadm.go:740] duration metric: took 4.524726ms waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996870  585014 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:35.005255  585014 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.012539  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012568  585014 pod_ready.go:82] duration metric: took 7.278613ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.012580  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012589  585014 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.018465  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018489  585014 pod_ready.go:82] duration metric: took 5.8848ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.018500  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018509  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.026503  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026533  585014 pod_ready.go:82] duration metric: took 8.012156ms for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.026544  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026555  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.058419  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058449  585014 pod_ready.go:82] duration metric: took 31.879605ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.058463  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058471  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.458244  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458275  585014 pod_ready.go:82] duration metric: took 399.794285ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.458286  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458292  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.858567  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858612  585014 pod_ready.go:82] duration metric: took 400.312425ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.858625  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858637  585014 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:36.258490  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258520  585014 pod_ready.go:82] duration metric: took 399.870797ms for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:36.258530  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258538  585014 pod_ready.go:39] duration metric: took 1.261659261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:36.258558  585014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:07:36.269993  585014 ops.go:34] apiserver oom_adj: -16
	I1008 19:07:36.270016  585014 kubeadm.go:597] duration metric: took 9.593365367s to restartPrimaryControlPlane
	I1008 19:07:36.270025  585014 kubeadm.go:394] duration metric: took 9.645330227s to StartCluster
	I1008 19:07:36.270044  585014 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.270125  585014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:07:36.271682  585014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.271945  585014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:07:36.272024  585014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:07:36.272130  585014 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-783146"
	I1008 19:07:36.272158  585014 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-783146"
	W1008 19:07:36.272166  585014 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:07:36.272152  585014 addons.go:69] Setting default-storageclass=true in profile "embed-certs-783146"
	I1008 19:07:36.272179  585014 addons.go:69] Setting metrics-server=true in profile "embed-certs-783146"
	I1008 19:07:36.272198  585014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-783146"
	I1008 19:07:36.272203  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272213  585014 addons.go:234] Setting addon metrics-server=true in "embed-certs-783146"
	W1008 19:07:36.272224  585014 addons.go:243] addon metrics-server should already be in state true
	I1008 19:07:36.272256  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272187  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:36.272616  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272638  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272658  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272689  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272694  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272738  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.274263  585014 out.go:177] * Verifying Kubernetes components...
	I1008 19:07:36.275444  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:36.288219  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1008 19:07:36.288686  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.289297  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.289328  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.289721  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.290415  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.290462  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.293043  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1008 19:07:36.293374  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I1008 19:07:36.293461  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293721  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293954  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.293978  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294188  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.294212  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294299  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294504  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.294534  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294982  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.295028  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.297638  585014 addons.go:234] Setting addon default-storageclass=true in "embed-certs-783146"
	W1008 19:07:36.297661  585014 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:07:36.297692  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.298042  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.298081  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.309286  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1008 19:07:36.309776  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310024  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1008 19:07:36.310337  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310360  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.310478  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310771  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.310980  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310997  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.311013  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.311330  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.311500  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.313004  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1008 19:07:36.313159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313368  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.313523  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313926  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.313951  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.314284  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.314777  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.314820  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.314992  585014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:07:36.315010  585014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:07:36.316168  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:07:36.316191  585014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:07:36.316212  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.316309  585014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.316333  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:07:36.316352  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.320088  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320418  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320566  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320591  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320733  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.320888  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320912  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320931  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321074  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.321181  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321235  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321400  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321397  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.321532  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.331532  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1008 19:07:36.331881  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.332309  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.332331  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.332724  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.332929  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.334589  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.334775  585014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.334797  585014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:07:36.334811  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.337675  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.338093  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338209  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.338380  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.338491  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.338600  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.444532  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:36.462719  585014 node_ready.go:35] waiting up to 6m0s for node "embed-certs-783146" to be "Ready" ...
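	The node_ready wait above polls the node object until its Ready condition turns True. A small client-go sketch of that kind of wait, assuming a standard kubeconfig path (placeholder) and the node name from this run; this is only an approximation of the check, not minikube's node_ready.go:

	// nodeready.go - illustrative sketch: wait for a node's Ready condition.
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config") // placeholder kubeconfig
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "embed-certs-783146", metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
				os.Exit(1)
			case <-time.After(2 * time.Second):
			}
		}
	}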
	I1008 19:07:36.519485  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.613714  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:07:36.613738  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:07:36.637773  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.645883  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:07:36.645907  585014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:07:36.685924  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.685952  585014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:07:36.710461  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.970231  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970256  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970563  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970589  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970599  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970606  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970860  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970881  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970892  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980520  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.980538  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.980826  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980869  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.980888  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.676577  585014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038767196s)
	I1008 19:07:37.676633  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.676646  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.676972  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.676982  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677040  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677058  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.677075  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.677333  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677351  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677375  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689600  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689615  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.689883  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.689897  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689901  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.689917  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689934  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.690210  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.690227  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.690240  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.690256  585014 addons.go:475] Verifying addon metrics-server=true in "embed-certs-783146"
	I1008 19:07:37.692035  585014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1008 19:07:36.383659  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.383993  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.384026  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:36.383939  586083 retry.go:31] will retry after 3.325274435s: waiting for machine to come up
	I1008 19:07:39.713420  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.713902  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Found IP for machine: 192.168.50.213
	I1008 19:07:39.713926  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserving static IP address...
	I1008 19:07:39.713945  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has current primary IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.714332  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.714362  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserved static IP address: 192.168.50.213
	I1008 19:07:39.714382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | skip adding static IP to network mk-default-k8s-diff-port-142496 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"}
	I1008 19:07:39.714401  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Getting to WaitForSSH function...
	I1008 19:07:39.714415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for SSH to be available...
	I1008 19:07:39.716542  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.716905  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.716951  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.717025  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH client type: external
	I1008 19:07:39.717052  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa (-rw-------)
	I1008 19:07:39.717111  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:39.717147  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | About to run SSH command:
	I1008 19:07:39.717165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | exit 0
	I1008 19:07:39.842089  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:39.842499  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetConfigRaw
	I1008 19:07:39.843125  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:39.845604  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.845976  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.846008  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.846276  585096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:07:39.846509  585096 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:39.846541  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:39.846768  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.849107  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849411  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.849435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849743  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.849924  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850084  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850236  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.850422  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.850679  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.850695  585096 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:39.950481  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:39.950507  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.950796  585096 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142496"
	I1008 19:07:39.950825  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.951016  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.953300  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.953678  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953833  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.954002  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954168  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954297  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.954450  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.954621  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.954636  585096 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142496 && echo "default-k8s-diff-port-142496" | sudo tee /etc/hostname
	I1008 19:07:40.068848  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142496
	
	I1008 19:07:40.068876  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.071855  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072195  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.072226  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072392  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.072563  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072746  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072871  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.073039  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.073237  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.073257  585096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:40.183039  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:40.183073  585096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:40.183116  585096 buildroot.go:174] setting up certificates
	I1008 19:07:40.183131  585096 provision.go:84] configureAuth start
	I1008 19:07:40.183146  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:40.183451  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:40.185904  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186264  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.186284  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186453  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.188672  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.189037  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189134  585096 provision.go:143] copyHostCerts
	I1008 19:07:40.189204  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:40.189217  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:40.189281  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:40.189427  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:40.189441  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:40.189474  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:40.189563  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:40.189573  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:40.189600  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:40.189679  585096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142496 san=[127.0.0.1 192.168.50.213 default-k8s-diff-port-142496 localhost minikube]
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:37.693230  585014 addons.go:510] duration metric: took 1.421218857s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1008 19:07:38.466754  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:40.967492  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:40.418969  585096 provision.go:177] copyRemoteCerts
	I1008 19:07:40.419032  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:40.419060  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.421382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421701  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.421730  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421912  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.422108  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.422287  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.422426  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.500533  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:40.524199  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 19:07:40.547495  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:07:40.570656  585096 provision.go:87] duration metric: took 387.509086ms to configureAuth
	I1008 19:07:40.570687  585096 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:40.570859  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:40.570934  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.573578  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.573941  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.573970  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.574088  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.574290  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574534  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574680  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.574881  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.575056  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.575074  585096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:40.795575  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:40.795604  585096 machine.go:96] duration metric: took 949.073836ms to provisionDockerMachine
	I1008 19:07:40.795618  585096 start.go:293] postStartSetup for "default-k8s-diff-port-142496" (driver="kvm2")
	I1008 19:07:40.795629  585096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:40.795646  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:40.796003  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:40.796042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.798307  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798635  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.798666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798881  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.799039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.799249  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.799369  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.880470  585096 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:40.884632  585096 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:40.884660  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:40.884719  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:40.884834  585096 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:40.884947  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:40.893828  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:40.917278  585096 start.go:296] duration metric: took 121.644332ms for postStartSetup
	I1008 19:07:40.917320  585096 fix.go:56] duration metric: took 19.950206082s for fixHost
	I1008 19:07:40.917342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.919971  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920315  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.920342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920539  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.920782  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.920969  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.921114  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.921292  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.921519  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.921535  585096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:41.022573  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414460.977520721
	
	I1008 19:07:41.022596  585096 fix.go:216] guest clock: 1728414460.977520721
	I1008 19:07:41.022603  585096 fix.go:229] Guest: 2024-10-08 19:07:40.977520721 +0000 UTC Remote: 2024-10-08 19:07:40.917324605 +0000 UTC m=+230.557951471 (delta=60.196116ms)
	I1008 19:07:41.022627  585096 fix.go:200] guest clock delta is within tolerance: 60.196116ms
	I1008 19:07:41.022634  585096 start.go:83] releasing machines lock for "default-k8s-diff-port-142496", held for 20.055558507s
	I1008 19:07:41.022665  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.022896  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:41.025861  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026272  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.026301  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026479  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027126  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027537  585096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:41.027581  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.027725  585096 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:41.027749  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.030474  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.030745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031094  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031123  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031148  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031322  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031511  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031572  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031827  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.031883  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.135368  585096 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:41.141492  585096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:41.288617  585096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:41.295482  585096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:41.295550  585096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:41.310709  585096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:41.310738  585096 start.go:495] detecting cgroup driver to use...
	I1008 19:07:41.310821  585096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:41.328574  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:41.342506  585096 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:41.342564  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:41.356308  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:41.372510  585096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:41.497084  585096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:41.665187  585096 docker.go:233] disabling docker service ...
	I1008 19:07:41.665272  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:41.682309  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:41.702567  585096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:41.882727  585096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:42.006479  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:42.020474  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:42.039750  585096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:42.039834  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.050395  585096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:42.050449  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.060572  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.071974  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.083208  585096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:42.097166  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.110090  585096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.128424  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.139296  585096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:42.148278  585096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:42.148320  585096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:42.164007  585096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:42.173218  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:42.303890  585096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:07:42.412074  585096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:42.412155  585096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:42.418606  585096 start.go:563] Will wait 60s for crictl version
	I1008 19:07:42.418662  585096 ssh_runner.go:195] Run: which crictl
	I1008 19:07:42.422670  585096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:42.469322  585096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:42.469432  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.501089  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.530412  585096 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:42.531554  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:42.534587  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.534928  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:42.534968  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.535235  585096 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:42.539279  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:42.552259  585096 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:42.552380  585096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:42.552447  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:42.588849  585096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:42.588928  585096 ssh_runner.go:195] Run: which lz4
	I1008 19:07:42.592785  585096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:42.597089  585096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:42.597119  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:44.003959  585096 crio.go:462] duration metric: took 1.411213503s to copy over tarball
	I1008 19:07:44.004075  585096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:43.467315  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:43.975147  585014 node_ready.go:49] node "embed-certs-783146" has status "Ready":"True"
	I1008 19:07:43.975180  585014 node_ready.go:38] duration metric: took 7.512429362s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:43.975194  585014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:43.982537  585014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999539  585014 pod_ready.go:93] pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:43.999566  585014 pod_ready.go:82] duration metric: took 16.995034ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999578  585014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506007  585014 pod_ready.go:93] pod "etcd-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:44.506032  585014 pod_ready.go:82] duration metric: took 506.447262ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506043  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
	I1008 19:07:46.159478  585096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155358377s)
	I1008 19:07:46.159512  585096 crio.go:469] duration metric: took 2.155494994s to extract the tarball
	I1008 19:07:46.159532  585096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:46.196073  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:46.239224  585096 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:46.239253  585096 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:46.239263  585096 kubeadm.go:934] updating node { 192.168.50.213 8444 v1.31.1 crio true true} ...
	I1008 19:07:46.239412  585096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:46.239482  585096 ssh_runner.go:195] Run: crio config
	I1008 19:07:46.284916  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:46.284941  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:46.284959  585096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:46.284980  585096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142496 NodeName:default-k8s-diff-port-142496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:46.285145  585096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142496"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:46.285218  585096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:46.295176  585096 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:46.295278  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:46.304340  585096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1008 19:07:46.320234  585096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:46.336215  585096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1008 19:07:46.352435  585096 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:46.355991  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:46.367424  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:46.491070  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:46.509165  585096 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496 for IP: 192.168.50.213
	I1008 19:07:46.509192  585096 certs.go:194] generating shared ca certs ...
	I1008 19:07:46.509213  585096 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:46.509413  585096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:46.509488  585096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:46.509507  585096 certs.go:256] generating profile certs ...
	I1008 19:07:46.509642  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/client.key
	I1008 19:07:46.509724  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key.8b79a92b
	I1008 19:07:46.509806  585096 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key
	I1008 19:07:46.510014  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:46.510069  585096 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:46.510082  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:46.510109  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:46.510154  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:46.510177  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:46.510220  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:46.510965  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:46.548979  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:46.588042  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:46.617201  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:46.645499  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 19:07:46.673075  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:46.705336  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:46.727739  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:07:46.755352  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:46.782421  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:46.804813  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:46.827321  585096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:46.843375  585096 ssh_runner.go:195] Run: openssl version
	I1008 19:07:46.848936  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:46.860851  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865320  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865379  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.871107  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:46.881518  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:46.891868  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.895991  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.896026  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.901219  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:46.914282  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:46.925095  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929407  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929465  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.934778  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:46.946807  585096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:46.951173  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:46.957072  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:46.962822  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:46.968584  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:46.974679  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:46.980081  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:46.985537  585096 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:46.985659  585096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:46.985706  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.025838  585096 cri.go:89] found id: ""
	I1008 19:07:47.025924  585096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:47.037778  585096 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:47.037800  585096 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:47.037847  585096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:47.049787  585096 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:47.050778  585096 kubeconfig.go:125] found "default-k8s-diff-port-142496" server: "https://192.168.50.213:8444"
	I1008 19:07:47.052921  585096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:47.062696  585096 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I1008 19:07:47.062747  585096 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:47.062775  585096 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:47.062822  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.101981  585096 cri.go:89] found id: ""
	I1008 19:07:47.102054  585096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:47.119421  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:47.129168  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:47.129189  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:47.129253  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:07:47.138071  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:47.138125  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:47.147202  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:07:47.155923  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:47.155979  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:47.164829  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.173366  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:47.173413  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.182417  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:07:47.191170  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:47.191228  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:47.200115  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:47.209146  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:47.314572  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.318198  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003546788s)
	I1008 19:07:48.318245  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.533505  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.617977  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.743670  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:48.743782  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.244765  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.744287  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.243920  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:46.513648  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:49.013409  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:50.422334  585014 pod_ready.go:93] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.422364  585014 pod_ready.go:82] duration metric: took 5.916314463s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.422379  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929739  585014 pod_ready.go:93] pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.929775  585014 pod_ready.go:82] duration metric: took 507.386631ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929790  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935612  585014 pod_ready.go:93] pod "kube-proxy-9l7t7" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.935638  585014 pod_ready.go:82] duration metric: took 5.84081ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935650  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941106  585014 pod_ready.go:93] pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.941131  585014 pod_ready.go:82] duration metric: took 5.47259ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941143  585014 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
	I1008 19:07:50.744524  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.830124  585096 api_server.go:72] duration metric: took 2.086461343s to wait for apiserver process to appear ...
	I1008 19:07:50.830161  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:50.830192  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:50.830915  585096 api_server.go:269] stopped: https://192.168.50.213:8444/healthz: Get "https://192.168.50.213:8444/healthz": dial tcp 192.168.50.213:8444: connect: connection refused
	I1008 19:07:51.331031  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.027442  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.027468  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.027483  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.101043  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.101073  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.330385  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.335009  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.335035  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:54.830407  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.835912  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.835939  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:55.330454  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:55.336271  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:07:55.343556  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:55.343586  585096 api_server.go:131] duration metric: took 4.513416619s to wait for apiserver health ...
	I1008 19:07:55.343604  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:55.343612  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:55.345259  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:55.346612  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:55.357899  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:55.383903  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:52.948407  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:55.449059  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:07:55.397903  585096 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:55.397935  585096 system_pods.go:61] "coredns-7c65d6cfc9-tkg8j" [0b436a1f-2b8e-4a5f-8063-695480275f2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:55.397944  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [cc702ae5-7e74-4a18-942e-1d236d39c43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:55.397952  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [da72d2f3-aab5-42c3-9733-7c0ce470e61e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:55.397959  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [de964717-b4de-4c7c-a9b5-164e7a048d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:55.397966  585096 system_pods.go:61] "kube-proxy-lwggr" [d5d96599-c3d3-4eba-a2ad-0c027e8ef1ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:55.397971  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [9218d69d-97ca-4680-856b-95c43fa371ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:55.397976  585096 system_pods.go:61] "metrics-server-6867b74b74-pfc2c" [9bafd6da-a33e-4182-a0d7-5e4c9473f057] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:55.397982  585096 system_pods.go:61] "storage-provisioner" [b60980ab-2552-404e-b351-4b163a075732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:55.397988  585096 system_pods.go:74] duration metric: took 14.056648ms to wait for pod list to return data ...
	I1008 19:07:55.397997  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:55.403870  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:55.403906  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:55.403920  585096 node_conditions.go:105] duration metric: took 5.917994ms to run NodePressure ...
	I1008 19:07:55.403941  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:55.677555  585096 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682514  585096 kubeadm.go:739] kubelet initialised
	I1008 19:07:55.682539  585096 kubeadm.go:740] duration metric: took 4.953783ms waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682550  585096 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:55.688641  585096 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:57.695361  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.195582  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:57.948167  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.446946  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
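
The provisioning step above writes /etc/sysconfig/crio.minikube over SSH so that CRI-O treats the in-cluster service CIDR 10.96.0.0/12 as an insecure registry, then restarts the crio service. A minimal Go sketch of the same idea, run locally instead of over SSH (writeCrioMinikubeOptions is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// writeCrioMinikubeOptions mirrors the SSH command logged above: write
// /etc/sysconfig/crio.minikube with an --insecure-registry flag for the
// service CIDR, then restart CRI-O so the option takes effect.
// Hypothetical helper; minikube runs the equivalent shell over SSH.
func writeCrioMinikubeOptions(insecureCIDR string) error {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := writeCrioMinikubeOptions("10.96.0.0/12"); err != nil {
		fmt.Fprintln(os.Stderr, "configure crio:", err)
		os.Exit(1)
	}
}
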
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.834691  584371 start.go:364] duration metric: took 54.903126319s to acquireMachinesLock for "no-preload-966632"
	I1008 19:08:01.834745  584371 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:08:01.834753  584371 fix.go:54] fixHost starting: 
	I1008 19:08:01.835158  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:01.835200  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:01.854850  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1008 19:08:01.855220  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:01.855740  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:01.855770  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:01.856201  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:01.856428  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:01.856587  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:01.857921  584371 fix.go:112] recreateIfNeeded on no-preload-966632: state=Stopped err=<nil>
	I1008 19:08:01.857943  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	W1008 19:08:01.858110  584371 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:08:01.859994  584371 out.go:177] * Restarting existing kvm2 VM for "no-preload-966632" ...
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
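
The fix.go lines above read the VM's clock with `date +%s.%N` over SSH and check that the skew against the host clock is within tolerance (62.5ms here). A rough Go sketch of that comparison, run locally for illustration (guestClockDelta is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta reads a clock with `date +%s.%N` (minikube runs this over
// SSH inside the VM; here it runs locally) and returns how far it is from
// the local clock. Hypothetical helper for illustration only.
func guestClockDelta() (time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	d, err := guestClockDelta()
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	// The log above treats a delta of ~62ms as within tolerance.
	fmt.Printf("guest clock delta: %v\n", d)
}
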
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
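
The failed sysctl above is expected when the br_netfilter kernel module is not yet loaded; the next line falls back to modprobe before enabling IPv4 forwarding and restarting CRI-O. A minimal sketch of that check-and-fallback (ensureBridgeNetfilter is a hypothetical helper; the real commands run over SSH inside the VM):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback visible in the log: if the
// net.bridge.bridge-nf-call-iptables sysctl cannot be read, the br_netfilter
// kernel module is not loaded yet, so load it with modprobe.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl readable, module already loaded
	}
	return exec.Command("modprobe", "br_netfilter").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, "br_netfilter:", err)
		os.Exit(1)
	}
}
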
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1008 19:08:01.861355  584371 main.go:141] libmachine: (no-preload-966632) Calling .Start
	I1008 19:08:01.861539  584371 main.go:141] libmachine: (no-preload-966632) Ensuring networks are active...
	I1008 19:08:01.862455  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network default is active
	I1008 19:08:01.862878  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network mk-no-preload-966632 is active
	I1008 19:08:01.863368  584371 main.go:141] libmachine: (no-preload-966632) Getting domain xml...
	I1008 19:08:01.864106  584371 main.go:141] libmachine: (no-preload-966632) Creating domain...
	I1008 19:08:03.179854  584371 main.go:141] libmachine: (no-preload-966632) Waiting to get IP...
	I1008 19:08:03.180838  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.181232  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.181301  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.181206  586496 retry.go:31] will retry after 229.567854ms: waiting for machine to come up
	I1008 19:08:03.412710  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.413201  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.413225  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.413170  586496 retry.go:31] will retry after 361.675143ms: waiting for machine to come up
	I1008 19:08:03.776466  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.777140  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.777184  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.777047  586496 retry.go:31] will retry after 323.194852ms: waiting for machine to come up
	I1008 19:08:04.101865  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.102357  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.102388  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.102310  586496 retry.go:31] will retry after 484.995282ms: waiting for machine to come up
	I1008 19:08:02.698935  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:05.195930  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:02.447582  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:04.450889  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
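
The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway address 192.168.39.1, stripping any stale entry first so repeated starts stay idempotent. An equivalent sketch in Go (ensureHostsEntry is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner in the log: drop any existing
// line for the given hostname from /etc/hosts and append a fresh
// "IP<TAB>hostname" entry, keeping repeated starts idempotent.
func ensureHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, removed like `grep -v` does in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
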
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
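
Because no preloaded images were found in the runtime, the preload tarball is copied to the VM and unpacked into /var with lz4 so CRI-O's image store is populated before kubeadm runs. A small Go wrapper around the same tar invocation (extractPreload is hypothetical; minikube drives this over SSH after scp'ing the tarball):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload wraps the tar invocation from the log: the preload tarball
// is lz4-compressed and is unpacked into /var so the container images land
// in the runtime's storage under /var/lib.
func extractPreload(tarball string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4
		"-C", "/var", // unpack into /var
		"-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, "extract preload:", err)
		os.Exit(1)
	}
}
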
	I1008 19:08:04.589063  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.589752  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.589780  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.589706  586496 retry.go:31] will retry after 543.703113ms: waiting for machine to come up
	I1008 19:08:05.135522  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.135997  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.136023  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.135944  586496 retry.go:31] will retry after 617.479763ms: waiting for machine to come up
	I1008 19:08:05.754978  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.755541  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.755568  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.755486  586496 retry.go:31] will retry after 849.017716ms: waiting for machine to come up
	I1008 19:08:06.606621  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:06.607072  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:06.607105  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:06.607023  586496 retry.go:31] will retry after 1.133489837s: waiting for machine to come up
	I1008 19:08:07.742713  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:07.743299  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:07.743329  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:07.743252  586496 retry.go:31] will retry after 1.797316795s: waiting for machine to come up
	I1008 19:08:07.196317  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.698409  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.698443  585096 pod_ready.go:82] duration metric: took 12.009772792s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.698475  585096 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.708991  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.709015  585096 pod_ready.go:82] duration metric: took 10.527401ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.709028  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714343  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.714369  585096 pod_ready.go:82] duration metric: took 5.331417ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714383  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.118973  585096 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:06.948829  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:09.448376  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
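
The kubelet systemd drop-in above pins kubelet to the CRI-O socket, the node name old-k8s-version-256554, and the node IP 192.168.39.90; a few lines later it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal Go sketch of rendering such a drop-in (this template is illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// An illustrative template for the kubelet drop-in shown above; a sketch of
// how the rendered bytes could be produced before being copied to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-256554", "192.168.39.90"}
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Render to stdout; minikube instead copies the rendered bytes to the VM.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
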
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
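
The 2120-byte payload copied above is the kubeadm config rendered earlier, staged on the VM as kubeadm.yaml.new before the cluster restart logic invokes kubeadm. A minimal sketch of that staging step, assuming local file access instead of scp (writeStaged is a hypothetical helper):

package main

import (
	"fmt"
	"os"
)

// writeStaged sketches the staging step above: the rendered kubeadm config
// is placed at /var/tmp/minikube/kubeadm.yaml.new on the VM before kubeadm
// is invoked. Hypothetical helper using local file access; minikube copies
// the bytes over SSH instead.
func writeStaged(rendered []byte) error {
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", rendered, 0o644)
}

func main() {
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta2\nkind: InitConfiguration\n# ... trimmed; see the full config earlier in the log ...\n")
	if err := writeStaged(cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
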
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:09.541879  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:09.542340  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:09.542372  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:09.542288  586496 retry.go:31] will retry after 2.238590286s: waiting for machine to come up
	I1008 19:08:11.783440  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:11.783909  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:11.783945  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:11.783858  586496 retry.go:31] will retry after 2.226110801s: waiting for machine to come up
	I1008 19:08:14.012103  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:14.012538  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:14.012561  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:14.012493  586496 retry.go:31] will retry after 2.298206633s: waiting for machine to come up
	I1008 19:08:10.849833  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.849856  585096 pod_ready.go:82] duration metric: took 3.13546554s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.849868  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858341  585096 pod_ready.go:93] pod "kube-proxy-lwggr" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.858367  585096 pod_ready.go:82] duration metric: took 8.492572ms for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858379  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865890  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.865909  585096 pod_ready.go:82] duration metric: took 7.521945ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865918  585096 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:12.873861  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:15.372408  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.450482  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:13.948331  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.313868  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:16.314460  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:16.314484  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:16.314424  586496 retry.go:31] will retry after 3.672085858s: waiting for machine to come up
	I1008 19:08:17.872689  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.372637  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.448090  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:18.947580  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.948804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.989014  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989556  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has current primary IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989576  584371 main.go:141] libmachine: (no-preload-966632) Found IP for machine: 192.168.61.141
	I1008 19:08:19.989589  584371 main.go:141] libmachine: (no-preload-966632) Reserving static IP address...
	I1008 19:08:19.990000  584371 main.go:141] libmachine: (no-preload-966632) Reserved static IP address: 192.168.61.141
	I1008 19:08:19.990036  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.990048  584371 main.go:141] libmachine: (no-preload-966632) Waiting for SSH to be available...
	I1008 19:08:19.990068  584371 main.go:141] libmachine: (no-preload-966632) DBG | skip adding static IP to network mk-no-preload-966632 - found existing host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"}
	I1008 19:08:19.990076  584371 main.go:141] libmachine: (no-preload-966632) DBG | Getting to WaitForSSH function...
	I1008 19:08:19.992644  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.992970  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.993010  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.993081  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH client type: external
	I1008 19:08:19.993104  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa (-rw-------)
	I1008 19:08:19.993136  584371 main.go:141] libmachine: (no-preload-966632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:19.993152  584371 main.go:141] libmachine: (no-preload-966632) DBG | About to run SSH command:
	I1008 19:08:19.993174  584371 main.go:141] libmachine: (no-preload-966632) DBG | exit 0
	I1008 19:08:20.118205  584371 main.go:141] libmachine: (no-preload-966632) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:20.118616  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetConfigRaw
	I1008 19:08:20.119326  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.122203  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122678  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.122708  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122926  584371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 19:08:20.123144  584371 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:20.123164  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:20.123360  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.125759  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126083  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.126108  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126265  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.126442  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.126980  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.127189  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.127201  584371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:20.234458  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:20.234491  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.234781  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:08:20.234811  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.235044  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.237673  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.237993  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.238016  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.238221  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.238418  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238612  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238806  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.238981  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.239176  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.239203  584371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-966632 && echo "no-preload-966632" | sudo tee /etc/hostname
	I1008 19:08:20.360621  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-966632
	
	I1008 19:08:20.360649  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.363600  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.363909  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.363947  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.364166  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.364297  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364426  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364510  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.364630  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.364855  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.364881  584371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-966632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-966632/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-966632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:20.483101  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:20.483131  584371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:20.483149  584371 buildroot.go:174] setting up certificates
	I1008 19:08:20.483161  584371 provision.go:84] configureAuth start
	I1008 19:08:20.483171  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.483429  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.486467  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.486838  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.486871  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.487037  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.489207  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489531  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.489557  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489655  584371 provision.go:143] copyHostCerts
	I1008 19:08:20.489726  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:20.489737  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:20.489803  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:20.489927  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:20.489939  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:20.489987  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:20.490072  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:20.490083  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:20.490110  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:20.490231  584371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.no-preload-966632 san=[127.0.0.1 192.168.61.141 localhost minikube no-preload-966632]
	I1008 19:08:20.618050  584371 provision.go:177] copyRemoteCerts
	I1008 19:08:20.618117  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:20.618149  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.621118  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621458  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.621485  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621670  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.621875  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.622056  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.622224  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:20.704439  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:20.730441  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:08:20.755072  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:08:20.777513  584371 provision.go:87] duration metric: took 294.340685ms to configureAuth
	I1008 19:08:20.777550  584371 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:20.777774  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:08:20.777873  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.780540  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.780956  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.780995  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.781185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.781423  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781615  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.781989  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.782179  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.782203  584371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:21.003896  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:21.003925  584371 machine.go:96] duration metric: took 880.766243ms to provisionDockerMachine
	I1008 19:08:21.003940  584371 start.go:293] postStartSetup for "no-preload-966632" (driver="kvm2")
	I1008 19:08:21.003955  584371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:21.003974  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.004286  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:21.004312  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.007138  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007472  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.007500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007610  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.007820  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.007991  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.008163  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.093075  584371 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:21.097048  584371 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:21.097076  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:21.097160  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:21.097254  584371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:21.097370  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:21.106698  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:21.130484  584371 start.go:296] duration metric: took 126.530716ms for postStartSetup
	I1008 19:08:21.130526  584371 fix.go:56] duration metric: took 19.295774496s for fixHost
	I1008 19:08:21.130550  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.133361  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.133717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.133744  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.134048  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.134269  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134525  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134710  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.134888  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:21.135119  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:21.135135  584371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:21.242740  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414501.194174379
	
	I1008 19:08:21.242765  584371 fix.go:216] guest clock: 1728414501.194174379
	I1008 19:08:21.242776  584371 fix.go:229] Guest: 2024-10-08 19:08:21.194174379 +0000 UTC Remote: 2024-10-08 19:08:21.130530022 +0000 UTC m=+356.786912807 (delta=63.644357ms)
	I1008 19:08:21.242823  584371 fix.go:200] guest clock delta is within tolerance: 63.644357ms
	I1008 19:08:21.242835  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 19.408108613s
	I1008 19:08:21.242857  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.243112  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:21.245967  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246378  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.246409  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246731  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247314  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247500  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247588  584371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:21.247640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.247706  584371 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:21.247731  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.250191  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250228  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250665  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250694  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250729  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250789  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250948  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250962  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251129  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251314  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.251334  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251462  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.353600  584371 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:21.360031  584371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:21.502001  584371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:21.508846  584371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:21.508938  584371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:21.524597  584371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:21.524626  584371 start.go:495] detecting cgroup driver to use...
	I1008 19:08:21.524699  584371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:21.541500  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:21.553886  584371 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:21.553943  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:21.567027  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:21.579965  584371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:21.692823  584371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:21.844393  584371 docker.go:233] disabling docker service ...
	I1008 19:08:21.844461  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:21.860471  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:21.873229  584371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:22.003106  584371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:22.129301  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:22.143314  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:22.161423  584371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:08:22.161494  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.171355  584371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:22.171429  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.180962  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.190212  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.199737  584371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:22.209488  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.219051  584371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.235430  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.245007  584371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:22.253705  584371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:22.253748  584371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:22.265343  584371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:22.275245  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:22.380960  584371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:22.471004  584371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:22.471067  584371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:22.475520  584371 start.go:563] Will wait 60s for crictl version
	I1008 19:08:22.475598  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.479271  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:22.523709  584371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:22.523787  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.551307  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.579271  584371 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:08:22.580608  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:22.583417  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583783  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:22.583825  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583991  584371 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:22.587937  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:22.600324  584371 kubeadm.go:883] updating cluster {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:22.600465  584371 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:08:22.600506  584371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:22.641111  584371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:08:22.641139  584371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:22.641194  584371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.641224  584371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.641284  584371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.641307  584371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.641377  584371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.641407  584371 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 19:08:22.641742  584371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642057  584371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.642568  584371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.642576  584371 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 19:08:22.642669  584371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.642876  584371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.642894  584371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.643310  584371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.799972  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.811504  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.815340  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.815659  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.817303  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.858380  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.864688  584371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1008 19:08:22.864727  584371 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.864762  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.877332  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1008 19:08:22.934971  584371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1008 19:08:22.935035  584371 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.935085  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945549  584371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1008 19:08:22.945594  584371 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.945644  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945645  584371 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1008 19:08:22.945683  584371 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.945685  584371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1008 19:08:22.945730  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945733  584371 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.945796  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981887  584371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1008 19:08:22.982012  584371 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.982059  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981954  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.082208  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.082210  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.082304  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.082411  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.082430  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.082543  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.178344  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.196633  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.196665  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.196733  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.209763  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.209830  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.310142  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.317659  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.317731  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.327221  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.331490  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.346298  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 19:08:23.346412  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.435656  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 19:08:23.435679  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1008 19:08:23.435783  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:23.435788  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:23.441591  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 19:08:23.441673  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:23.441696  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 19:08:23.441782  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 19:08:23.441814  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:23.441856  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:23.441901  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1008 19:08:23.441918  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.441947  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.445597  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1008 19:08:23.445630  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1008 19:08:23.449022  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1008 19:08:23.450009  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.373452  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:24.872600  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:23.448074  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:25.449287  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.950280  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.508431356s)
	I1008 19:08:25.950340  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.508402491s)
	I1008 19:08:25.950344  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1008 19:08:25.950357  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1008 19:08:25.950545  584371 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.50050623s)
	I1008 19:08:25.950600  584371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1008 19:08:25.950611  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.508516442s)
	I1008 19:08:25.950637  584371 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:25.950648  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1008 19:08:25.950680  584371 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:25.950688  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:25.950727  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:29.225357  584371 ssh_runner.go:235] Completed: which crictl: (3.274648192s)
	I1008 19:08:29.225514  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:29.225532  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.27477814s)
	I1008 19:08:29.225561  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1008 19:08:29.225593  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:29.225627  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:27.373617  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.374173  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:27.948313  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.948750  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.696201  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.470655089s)
	I1008 19:08:30.696255  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.470604601s)
	I1008 19:08:30.696284  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1008 19:08:30.696296  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:30.696317  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.696365  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.740520  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:32.685896  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.989500601s)
	I1008 19:08:32.685941  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1008 19:08:32.685971  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.685971  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.945412846s)
	I1008 19:08:32.686046  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.686045  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 19:08:32.686186  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:31.872718  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:33.873665  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:32.447765  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:34.948257  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.663874  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.977781248s)
	I1008 19:08:34.663914  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1008 19:08:34.663939  584371 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:34.663942  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977724244s)
	I1008 19:08:34.663973  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1008 19:08:34.663991  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:36.833283  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.169263327s)
	I1008 19:08:36.833320  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1008 19:08:36.833353  584371 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:36.833417  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:37.485901  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 19:08:37.485954  584371 cache_images.go:123] Successfully loaded all cached images
	I1008 19:08:37.485961  584371 cache_images.go:92] duration metric: took 14.844810749s to LoadCachedImages
	I1008 19:08:37.485973  584371 kubeadm.go:934] updating node { 192.168.61.141 8443 v1.31.1 crio true true} ...
	I1008 19:08:37.486084  584371 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-966632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:37.486149  584371 ssh_runner.go:195] Run: crio config
	I1008 19:08:37.544511  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:37.544535  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:37.544554  584371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:37.544576  584371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-966632 NodeName:no-preload-966632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:08:37.544718  584371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-966632"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:37.544792  584371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:08:37.556979  584371 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:37.557049  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:37.566249  584371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 19:08:37.583303  584371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:37.599535  584371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1008 19:08:37.616315  584371 ssh_runner.go:195] Run: grep 192.168.61.141	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:37.620089  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:37.632181  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:37.748647  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:37.765577  584371 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632 for IP: 192.168.61.141
	I1008 19:08:37.765600  584371 certs.go:194] generating shared ca certs ...
	I1008 19:08:37.765619  584371 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:37.765829  584371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:37.765890  584371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:37.765904  584371 certs.go:256] generating profile certs ...
	I1008 19:08:37.766020  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.key
	I1008 19:08:37.766095  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key.a515ed11
	I1008 19:08:37.766143  584371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key
	I1008 19:08:37.766334  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:37.766383  584371 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:37.766398  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:37.766430  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:37.766467  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:37.766501  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:37.766562  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:37.767588  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:37.804400  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:37.837466  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:37.865516  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:37.894827  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:08:37.918668  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:08:37.948238  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:37.974152  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:08:37.997284  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:38.019295  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:38.043392  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:38.067971  584371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:38.084940  584371 ssh_runner.go:195] Run: openssl version
	I1008 19:08:38.090779  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:38.102715  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107292  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107355  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.113456  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:38.123904  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:38.134337  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138503  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138561  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.143902  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:38.155393  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:38.167107  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171433  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171480  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.176968  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:38.188437  584371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:38.192733  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:38.198531  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:38.204187  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:38.210522  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:38.216328  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:38.222077  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:08:38.227724  584371 kubeadm.go:392] StartCluster: {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:38.227802  584371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:38.227882  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.262461  584371 cri.go:89] found id: ""
	I1008 19:08:38.262532  584371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:38.272591  584371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:38.272612  584371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:38.272677  584371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:38.282621  584371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:38.283683  584371 kubeconfig.go:125] found "no-preload-966632" server: "https://192.168.61.141:8443"
	I1008 19:08:38.286019  584371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:38.295315  584371 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.141
	I1008 19:08:38.295344  584371 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:38.295357  584371 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:38.295400  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.329462  584371 cri.go:89] found id: ""
	I1008 19:08:38.329533  584371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:38.345901  584371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:38.354899  584371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:38.354920  584371 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:38.354965  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:38.363242  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:38.363282  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:38.373063  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:38.381479  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:38.381530  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:38.390679  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.400033  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:38.400071  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.409308  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:38.417842  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:38.417876  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:38.427251  584371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:38.437010  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:38.562381  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.344247  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:36.372911  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:38.872768  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:37.448043  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:39.956579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.550458  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.619345  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.718016  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:39.718126  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.218974  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.719108  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.741178  584371 api_server.go:72] duration metric: took 1.023163924s to wait for apiserver process to appear ...
	I1008 19:08:40.741210  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:08:40.741235  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:40.741767  584371 api_server.go:269] stopped: https://192.168.61.141:8443/healthz: Get "https://192.168.61.141:8443/healthz": dial tcp 192.168.61.141:8443: connect: connection refused
	I1008 19:08:41.241356  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.787235  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:08:43.787284  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:08:43.787306  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.914606  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:43.914653  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:44.242033  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.247068  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.247097  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:40.873394  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:43.373475  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:42.446900  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:44.447141  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.742212  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.756340  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.756371  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.241997  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.246343  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.246367  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.741898  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.749274  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.749301  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.241889  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.246127  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.246155  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.741694  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.746192  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.746219  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:47.242250  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:47.246571  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:08:47.252812  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:08:47.252843  584371 api_server.go:131] duration metric: took 6.511626175s to wait for apiserver health ...
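The [-] entries in the 500 responses above are kube-apiserver post-start hooks that had not finished yet; once every hook reports ok, /healthz returns 200 and the wait loop exits. The same per-check detail can be pulled by hand. A minimal sketch, assuming the kubeconfig context carries the profile name seen in this log (no-preload-966632):

    # verbose health output, one line per check, as in the dumps above
    kubectl --context no-preload-966632 get --raw '/healthz?verbose'

    # or hit the endpoint directly; -k skips TLS verification for a quick look
    curl -k 'https://192.168.61.141:8443/healthz?verbose'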
	I1008 19:08:47.252852  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:47.252858  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:47.254723  584371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:08:47.255933  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:08:47.266073  584371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
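The bridge CNI configuration written here lands at /etc/cni/net.d/1-k8s.conflist inside the VM. To see what was actually installed (a hedged sketch; minikube's ssh subcommand takes the profile via -p and a command to run on the node):

    minikube -p no-preload-966632 ssh 'sudo cat /etc/cni/net.d/1-k8s.conflist'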
	I1008 19:08:47.284042  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:08:47.293401  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:08:47.293432  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:08:47.293439  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:08:47.293450  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:08:47.293456  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:08:47.293464  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:08:47.293469  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:08:47.293474  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:08:47.293478  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:08:47.293484  584371 system_pods.go:74] duration metric: took 9.422158ms to wait for pod list to return data ...
	I1008 19:08:47.293493  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:08:47.296923  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:08:47.296947  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:08:47.296960  584371 node_conditions.go:105] duration metric: took 3.462212ms to run NodePressure ...
	I1008 19:08:47.296979  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:47.562271  584371 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566914  584371 kubeadm.go:739] kubelet initialised
	I1008 19:08:47.566938  584371 kubeadm.go:740] duration metric: took 4.63692ms waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566950  584371 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:47.571271  584371 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.575633  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575659  584371 pod_ready.go:82] duration metric: took 4.364181ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.575671  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575680  584371 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.579443  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579465  584371 pod_ready.go:82] duration metric: took 3.775248ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.579475  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579483  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.583747  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583775  584371 pod_ready.go:82] duration metric: took 4.277306ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.583785  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583797  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.687618  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687652  584371 pod_ready.go:82] duration metric: took 103.843425ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.687663  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687669  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.087568  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087601  584371 pod_ready.go:82] duration metric: took 399.92202ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.087613  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087622  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.487223  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487256  584371 pod_ready.go:82] duration metric: took 399.625038ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.487269  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487278  584371 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.887764  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887798  584371 pod_ready.go:82] duration metric: took 400.504473ms for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.887812  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887821  584371 pod_ready.go:39] duration metric: took 1.320859293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:48.887842  584371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:08:48.901255  584371 ops.go:34] apiserver oom_adj: -16
	I1008 19:08:48.901279  584371 kubeadm.go:597] duration metric: took 10.628659432s to restartPrimaryControlPlane
	I1008 19:08:48.901290  584371 kubeadm.go:394] duration metric: took 10.673572592s to StartCluster
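The oom_adj probe above confirms the restarted apiserver keeps a strongly negative OOM-killer adjustment (-16 on the legacy /proc interface), so it is among the last processes the kernel would reclaim under memory pressure. A rough manual equivalent using the modern interface (an assumption for illustration, not part of the test):

    # read the oom_score_adj of the oldest kube-apiserver process on the node
    minikube -p no-preload-966632 ssh 'cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj'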
	I1008 19:08:48.901313  584371 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.901397  584371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:48.904024  584371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.904361  584371 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:08:48.904455  584371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:08:48.904549  584371 addons.go:69] Setting storage-provisioner=true in profile "no-preload-966632"
	I1008 19:08:48.904565  584371 addons.go:69] Setting default-storageclass=true in profile "no-preload-966632"
	I1008 19:08:48.904594  584371 addons.go:234] Setting addon storage-provisioner=true in "no-preload-966632"
	W1008 19:08:48.904603  584371 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:08:48.904603  584371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-966632"
	I1008 19:08:48.904574  584371 addons.go:69] Setting metrics-server=true in profile "no-preload-966632"
	I1008 19:08:48.904646  584371 addons.go:234] Setting addon metrics-server=true in "no-preload-966632"
	I1008 19:08:48.904651  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.904652  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1008 19:08:48.904670  584371 addons.go:243] addon metrics-server should already be in state true
	I1008 19:08:48.904705  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.905079  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905116  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905133  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905151  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905159  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905205  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.906774  584371 out.go:177] * Verifying Kubernetes components...
	I1008 19:08:48.908138  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:48.942865  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1008 19:08:48.943612  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.944201  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.944232  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.944667  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.944748  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1008 19:08:48.945485  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.945526  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.945763  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.946464  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.946484  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.946530  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1008 19:08:48.946935  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.947052  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.947649  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.947693  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.948006  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.948027  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.948379  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.948602  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.951770  584371 addons.go:234] Setting addon default-storageclass=true in "no-preload-966632"
	W1008 19:08:48.951788  584371 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:08:48.951819  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.952055  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.952095  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.962422  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1008 19:08:48.962931  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.963509  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.963532  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.963908  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.964117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.965879  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.967812  584371 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:08:48.967853  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1008 19:08:48.967817  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1008 19:08:48.968376  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968436  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968885  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.968906  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.968964  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:08:48.968986  584371 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:08:48.969010  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.969290  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.969449  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.969472  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.969910  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.969941  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.970187  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.970430  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.972100  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972523  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.972544  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972677  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.972735  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.973016  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.973191  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.973323  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.974390  584371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:48.975651  584371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:48.975670  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:08:48.975686  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.978500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.978855  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.978876  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.979079  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.979474  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.979640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.979766  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.994846  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1008 19:08:48.995180  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.995592  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.995607  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.995976  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.996173  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.998270  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.998549  584371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:48.998568  584371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:08:48.998591  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:49.000647  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.000908  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:49.000924  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.001078  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:49.001185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:49.001282  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:49.001358  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:49.118217  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:49.138077  584371 node_ready.go:35] waiting up to 6m0s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:49.217300  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:49.241237  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:49.365395  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:08:49.365420  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:08:45.873500  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.373215  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:49.403596  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:08:49.403625  584371 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:08:49.438480  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:49.438540  584371 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:08:49.464366  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:50.474783  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.233506833s)
	I1008 19:08:50.474850  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474862  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.474914  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.257567473s)
	I1008 19:08:50.474955  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474964  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475191  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475206  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475215  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475221  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475280  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475289  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475297  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475303  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475310  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475441  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475454  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475582  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475596  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475628  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482003  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.482031  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.482315  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.482351  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482372  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.512902  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.048483922s)
	I1008 19:08:50.512957  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.512980  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513241  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513257  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513261  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513299  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.513307  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513534  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513552  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513561  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513577  584371 addons.go:475] Verifying addon metrics-server=true in "no-preload-966632"
	I1008 19:08:50.515302  584371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:08:46.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.448332  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:50.449239  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.516457  584371 addons.go:510] duration metric: took 1.612011936s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
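The addon YAMLs applied above correspond to the storage-provisioner, default-storageclass and metrics-server addons named on the "Enabled addons" line. A quick manual check that the objects landed (a sketch, assuming the kubeconfig context matches the profile name):

    minikube -p no-preload-966632 addons list
    kubectl --context no-preload-966632 -n kube-system get deploy metrics-server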
	I1008 19:08:51.141437  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:53.142166  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:54.141208  584371 node_ready.go:49] node "no-preload-966632" has status "Ready":"True"
	I1008 19:08:54.141238  584371 node_ready.go:38] duration metric: took 5.003121669s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:54.141251  584371 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:54.146685  584371 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151059  584371 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:54.151078  584371 pod_ready.go:82] duration metric: took 4.369406ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151086  584371 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:50.872416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:53.372230  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:52.947461  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:54.950183  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.157153  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.157458  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.658595  584371 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.658617  584371 pod_ready.go:82] duration metric: took 4.507524391s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.658627  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663785  584371 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.663811  584371 pod_ready.go:82] duration metric: took 5.176586ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663823  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668310  584371 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.668342  584371 pod_ready.go:82] duration metric: took 4.509914ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668356  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672380  584371 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.672397  584371 pod_ready.go:82] duration metric: took 4.034104ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672405  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676499  584371 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.676517  584371 pod_ready.go:82] duration metric: took 4.106343ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676527  584371 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
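The "Ready":"False" polling lines that follow are the per-pod readiness wait; for metrics-server the loop keeps re-reading the pod's Ready condition until it flips or the timeout expires. A roughly equivalent one-liner (hedged: k8s-app=metrics-server is the selector the upstream metrics-server manifests use):

    kubectl --context no-preload-966632 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=6m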
	I1008 19:08:55.873069  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.372424  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:57.448182  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:59.947932  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.682583  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.682958  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:00.872650  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.872783  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:05.371825  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.447340  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:04.447504  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.683354  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.183362  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.183636  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.872083  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.874058  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.947502  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:08.948054  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.683833  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.188267  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:12.371967  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.372404  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.448009  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:13.948106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:15.948926  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:16.683453  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.184073  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:16.873511  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.372353  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:18.446864  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:20.449087  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:21.184209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.683535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:21.373869  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.872606  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:22.947224  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.947961  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:26.183587  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.184349  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:26.373049  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.873435  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.447254  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:29.447470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:30.683953  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.183412  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.371635  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.373244  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.448173  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.458099  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.947741  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:35.184447  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.682974  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.872226  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.872308  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:39.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:38.447494  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:40.947078  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:39.686907  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.183930  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.184443  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.372207  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.872360  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.947712  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:45.447011  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:46.682572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.683539  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.372618  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.373137  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.448127  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.947787  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:50.683784  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:53.183034  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.873417  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.373414  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.948470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.447675  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:55.184058  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.683581  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.872213  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.872323  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.447790  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.947901  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.948561  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:00.182341  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.183137  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:04.186590  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.872469  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.872708  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.372543  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.447536  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.947676  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.683572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.183605  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.373804  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.872253  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.947764  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.948705  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:11.184118  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:13.184173  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.372084  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.373845  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.447832  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.948882  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:15.682883  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.184458  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:16.374416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.872958  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:17.446820  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.448067  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:20.686987  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.182360  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.372104  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.872156  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.947427  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.950711  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:25.183749  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:27.184437  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.372132  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.372299  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.948117  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.948772  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:10:29.682860  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:31.683468  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:34.184018  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.872607  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:32.872784  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.373822  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:33.447573  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.447614  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:36.682470  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:38.683099  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.872270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.872490  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.450208  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.947418  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:40.683975  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.182280  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.372711  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.873233  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.446673  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.447301  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:45.188927  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.682373  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.371936  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:49.372190  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.447866  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:48.947579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.683659  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.182955  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:54.184535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.373113  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.373239  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.446974  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.447804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.947655  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:56.682535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.683610  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.872286  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:57.872418  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:59.872720  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.447354  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:00.447456  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:00.684164  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:03.182802  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.873903  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.372767  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:02.947088  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.948196  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:05.683195  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.184305  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:06.872641  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.872811  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.447814  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:09.948377  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:10.186012  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.682829  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:11.374597  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:13.872241  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.447966  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.448396  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.683559  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.185276  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.372851  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:18.872479  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.947677  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:19.449701  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:19.682652  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.683209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.684681  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.371629  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.373290  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.947319  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:24.446685  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.183070  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.185526  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:25.872866  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.371245  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.371878  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.447559  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.948355  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.949170  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:30.683714  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.182628  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.373119  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:34.872336  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.447847  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:35.947756  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:35.183021  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.682254  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.371644  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:39.372270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.447549  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:40.946928  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.182928  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.873545  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.372751  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.947699  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.949106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:44.684067  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.182248  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.183397  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:46.871983  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:48.872101  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.447284  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.448275  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.949171  585014 pod_ready.go:82] duration metric: took 4m0.008012606s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:11:50.949202  585014 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:11:50.949213  585014 pod_ready.go:39] duration metric: took 4m6.974004451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:11:50.949249  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:11:50.949290  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.949351  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.998560  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:50.998584  585014 cri.go:89] found id: ""
	I1008 19:11:50.998591  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:11:50.998649  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.003407  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:51.003490  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:51.183611  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:53.683418  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.872692  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:52.873320  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:55.372722  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:51.049315  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:51.049343  585014 cri.go:89] found id: ""
	I1008 19:11:51.049353  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:11:51.049411  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.055212  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:51.055281  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:51.101271  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.101292  585014 cri.go:89] found id: ""
	I1008 19:11:51.101300  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:11:51.101360  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.105902  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:51.105966  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:51.150355  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.150390  585014 cri.go:89] found id: ""
	I1008 19:11:51.150402  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:11:51.150468  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.155116  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:51.155193  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:51.197754  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:51.197779  585014 cri.go:89] found id: ""
	I1008 19:11:51.197790  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:11:51.197846  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.201957  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:51.202023  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:51.239982  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:51.240001  585014 cri.go:89] found id: ""
	I1008 19:11:51.240009  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:11:51.240064  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.244580  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:51.244645  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:51.280099  585014 cri.go:89] found id: ""
	I1008 19:11:51.280126  585014 logs.go:282] 0 containers: []
	W1008 19:11:51.280137  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:51.280144  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:11:51.280205  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:11:51.323467  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:51.323508  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:51.323514  585014 cri.go:89] found id: ""
	I1008 19:11:51.323525  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:11:51.323676  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.328091  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.332113  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:51.332139  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:11:51.455430  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:11:51.455463  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.492792  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:11:51.492824  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.533732  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.533768  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:52.085919  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:11:52.085972  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:52.120874  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:52.120912  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:11:52.163961  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164188  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.164330  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164489  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.195681  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:52.195716  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:52.210569  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:11:52.210601  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:52.256667  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:11:52.256700  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:52.303627  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:11:52.303685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:52.340250  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:11:52.340279  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:52.402179  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:11:52.402213  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:52.440288  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:11:52.440326  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:52.478952  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.478979  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:11:52.479043  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:11:52.479060  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479068  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.479077  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479084  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.479092  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.479101  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.182942  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:58.184307  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:57.373902  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:59.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:00.683230  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:03.184294  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.372439  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:04.871327  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.480085  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.498401  585014 api_server.go:72] duration metric: took 4m26.226421652s to wait for apiserver process to appear ...
	I1008 19:12:02.498433  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:12:02.498479  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.498544  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:02.533531  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:02.533563  585014 cri.go:89] found id: ""
	I1008 19:12:02.533575  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:02.533643  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.537914  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:02.537985  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:02.579011  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:02.579039  585014 cri.go:89] found id: ""
	I1008 19:12:02.579049  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:02.579111  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.583628  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:02.583695  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:02.625038  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.625065  585014 cri.go:89] found id: ""
	I1008 19:12:02.625075  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:02.625138  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.629262  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:02.629331  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:02.662964  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:02.662988  585014 cri.go:89] found id: ""
	I1008 19:12:02.662997  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:02.663052  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.666955  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:02.667013  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:02.704552  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:02.704578  585014 cri.go:89] found id: ""
	I1008 19:12:02.704589  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:02.704640  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.708910  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:02.708962  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:02.743196  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.743220  585014 cri.go:89] found id: ""
	I1008 19:12:02.743229  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:02.743276  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.747488  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:02.747563  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:02.789367  585014 cri.go:89] found id: ""
	I1008 19:12:02.789405  585014 logs.go:282] 0 containers: []
	W1008 19:12:02.789418  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:02.789426  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:02.789479  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:02.828607  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:02.828640  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.828646  585014 cri.go:89] found id: ""
	I1008 19:12:02.828656  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:02.828723  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.832981  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.837258  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:02.837284  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.874214  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:02.874249  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.925844  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:02.925879  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.963715  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:02.963744  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.009069  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.009102  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:03.046628  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.046816  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.046947  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.047129  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.080027  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.080068  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:03.203192  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:03.203233  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:03.254645  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:03.254681  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:03.300881  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:03.300918  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:03.347403  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.347440  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.802754  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.802801  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.816658  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:03.816695  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:03.873630  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:03.873670  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:03.910834  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.910862  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:03.910932  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:03.910946  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910955  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.910972  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910983  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.910994  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.911006  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:05.682916  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:07.683273  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:06.872011  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:08.873682  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:09.684055  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.182786  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:14.186868  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:10.866893  585096 pod_ready.go:82] duration metric: took 4m0.000956825s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:10.866937  585096 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1008 19:12:10.866961  585096 pod_ready.go:39] duration metric: took 4m15.184400794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:10.866992  585096 kubeadm.go:597] duration metric: took 4m23.829186185s to restartPrimaryControlPlane
	W1008 19:12:10.867049  585096 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:10.867092  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:13.911944  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:12:13.917530  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:12:13.918513  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:12:13.918537  585014 api_server.go:131] duration metric: took 11.420096691s to wait for apiserver health ...
	I1008 19:12:13.918546  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:12:13.918570  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:13.918621  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:13.957026  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:13.957048  585014 cri.go:89] found id: ""
	I1008 19:12:13.957057  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:13.957114  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:13.961553  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:13.961611  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:13.996466  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:13.996497  585014 cri.go:89] found id: ""
	I1008 19:12:13.996508  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:13.996570  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.000972  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:14.001036  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:14.034888  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.034917  585014 cri.go:89] found id: ""
	I1008 19:12:14.034929  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:14.034989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.039145  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:14.039216  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:14.074109  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:14.074134  585014 cri.go:89] found id: ""
	I1008 19:12:14.074145  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:14.074202  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.078291  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:14.078371  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:14.113375  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:14.113403  585014 cri.go:89] found id: ""
	I1008 19:12:14.113413  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:14.113475  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.117909  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:14.118002  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:14.153800  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:14.153823  585014 cri.go:89] found id: ""
	I1008 19:12:14.153833  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:14.153898  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.158233  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:14.158302  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:14.195093  585014 cri.go:89] found id: ""
	I1008 19:12:14.195123  585014 logs.go:282] 0 containers: []
	W1008 19:12:14.195133  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:14.195142  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:14.195203  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:14.230894  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:14.230917  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:14.230921  585014 cri.go:89] found id: ""
	I1008 19:12:14.230929  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:14.230989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.236299  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.240914  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:14.240940  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:14.282289  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282488  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:14.282643  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282824  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:14.315207  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:14.315235  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:14.433616  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:14.433647  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:14.482640  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:14.482685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.524749  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:14.524788  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:14.979562  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:14.979629  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:15.016898  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:15.016941  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:15.058447  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:15.058478  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:15.114345  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:15.114384  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:15.128920  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:15.128948  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:15.176775  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:15.176817  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:15.215091  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:15.215129  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:15.256687  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:15.256731  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:15.311551  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311583  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:15.311641  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:15.311653  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311664  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:15.311676  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311681  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:15.311687  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311695  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:16.682464  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:19.182595  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:21.184074  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:23.682867  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:25.319305  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:12:25.319336  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.319340  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.319344  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.319348  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.319351  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.319354  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.319362  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.319365  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.319371  585014 system_pods.go:74] duration metric: took 11.400819931s to wait for pod list to return data ...
	I1008 19:12:25.319378  585014 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:12:25.322115  585014 default_sa.go:45] found service account: "default"
	I1008 19:12:25.322135  585014 default_sa.go:55] duration metric: took 2.751457ms for default service account to be created ...
	I1008 19:12:25.322143  585014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:12:25.326570  585014 system_pods.go:86] 8 kube-system pods found
	I1008 19:12:25.326590  585014 system_pods.go:89] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.326595  585014 system_pods.go:89] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.326599  585014 system_pods.go:89] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.326604  585014 system_pods.go:89] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.326610  585014 system_pods.go:89] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.326615  585014 system_pods.go:89] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.326625  585014 system_pods.go:89] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.326633  585014 system_pods.go:89] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.326642  585014 system_pods.go:126] duration metric: took 4.494323ms to wait for k8s-apps to be running ...
	I1008 19:12:25.326651  585014 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:12:25.326701  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:25.344597  585014 system_svc.go:56] duration metric: took 17.941012ms WaitForService to wait for kubelet
	I1008 19:12:25.344621  585014 kubeadm.go:582] duration metric: took 4m49.072648847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:12:25.344638  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:12:25.347385  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:12:25.347404  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:12:25.347425  585014 node_conditions.go:105] duration metric: took 2.783181ms to run NodePressure ...
	I1008 19:12:25.347437  585014 start.go:241] waiting for startup goroutines ...
	I1008 19:12:25.347450  585014 start.go:246] waiting for cluster config update ...
	I1008 19:12:25.347463  585014 start.go:255] writing updated cluster config ...
	I1008 19:12:25.347823  585014 ssh_runner.go:195] Run: rm -f paused
	I1008 19:12:25.395903  585014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:12:25.397911  585014 out.go:177] * Done! kubectl is now configured to use "embed-certs-783146" cluster and "default" namespace by default
	I1008 19:12:25.683645  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:28.182995  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:30.183567  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:32.682881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.013046  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.145916528s)
	I1008 19:12:37.013156  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:37.028010  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:37.037493  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:37.046435  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:37.046455  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:37.046495  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:12:37.055422  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:37.055482  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:37.064538  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:12:37.072968  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:37.073021  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:37.081754  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.090143  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:37.090179  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.098726  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:12:37.107261  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:37.107308  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:37.115975  585096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:37.163570  585096 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:12:37.163642  585096 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:37.272891  585096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:37.273025  585096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:37.273151  585096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:12:37.284204  585096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:37.286084  585096 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:37.286175  585096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:37.286263  585096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:37.286385  585096 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:37.286443  585096 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:37.286545  585096 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:37.286638  585096 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:37.286729  585096 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:37.286812  585096 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:37.286912  585096 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:37.287010  585096 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:37.287082  585096 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:37.287172  585096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:37.602946  585096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:37.727897  585096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:12:37.932126  585096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:37.989742  585096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:38.036655  585096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:38.037085  585096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:38.040618  585096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:35.182881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.683718  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:38.042238  585096 out.go:235]   - Booting up control plane ...
	I1008 19:12:38.042374  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:38.042568  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:38.043504  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:38.065666  585096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:38.071727  585096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:38.071814  585096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:38.210382  585096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:12:38.210516  585096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:12:39.213697  585096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003319891s
	I1008 19:12:39.213803  585096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:12:43.717718  585096 kubeadm.go:310] [api-check] The API server is healthy after 4.502167036s
	I1008 19:12:43.728628  585096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:12:43.744283  585096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:12:43.775369  585096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:12:43.775621  585096 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-142496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:12:43.788583  585096 kubeadm.go:310] [bootstrap-token] Using token: srsq4v.7le212xun40ljc7w
	I1008 19:12:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:42.183680  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:44.185065  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:43.789834  585096 out.go:235]   - Configuring RBAC rules ...
	I1008 19:12:43.789945  585096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:12:43.796091  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:12:43.807906  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:12:43.811025  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:12:43.814445  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:12:43.817615  585096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:12:44.122839  585096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:12:44.567387  585096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:12:45.122714  585096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:12:45.123480  585096 kubeadm.go:310] 
	I1008 19:12:45.123590  585096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:12:45.123617  585096 kubeadm.go:310] 
	I1008 19:12:45.123740  585096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:12:45.123749  585096 kubeadm.go:310] 
	I1008 19:12:45.123789  585096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:12:45.123870  585096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:12:45.123958  585096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:12:45.123984  585096 kubeadm.go:310] 
	I1008 19:12:45.124064  585096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:12:45.124080  585096 kubeadm.go:310] 
	I1008 19:12:45.124152  585096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:12:45.124162  585096 kubeadm.go:310] 
	I1008 19:12:45.124248  585096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:12:45.124366  585096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:12:45.124456  585096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:12:45.124469  585096 kubeadm.go:310] 
	I1008 19:12:45.124579  585096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:12:45.124682  585096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:12:45.124692  585096 kubeadm.go:310] 
	I1008 19:12:45.124804  585096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.124926  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:12:45.124953  585096 kubeadm.go:310] 	--control-plane 
	I1008 19:12:45.124958  585096 kubeadm.go:310] 
	I1008 19:12:45.125086  585096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:12:45.125093  585096 kubeadm.go:310] 
	I1008 19:12:45.125182  585096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.125321  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:12:45.126852  585096 kubeadm.go:310] W1008 19:12:37.105673    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127231  585096 kubeadm.go:310] W1008 19:12:37.106373    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127380  585096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:12:45.127429  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:12:45.127452  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:12:45.129742  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:12:45.130870  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:12:45.143909  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:12:45.170901  585096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:12:45.170965  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:45.170972  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-142496 minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=default-k8s-diff-port-142496 minikube.k8s.io/primary=true
	I1008 19:12:45.198031  585096 ops.go:34] apiserver oom_adj: -16
	I1008 19:12:45.385789  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.684251  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:49.183225  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:45.886434  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.386165  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.886920  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.386786  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.885835  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.386706  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.885981  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.386856  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.471554  585096 kubeadm.go:1113] duration metric: took 4.300656747s to wait for elevateKubeSystemPrivileges
	I1008 19:12:49.471596  585096 kubeadm.go:394] duration metric: took 5m2.486064826s to StartCluster
	I1008 19:12:49.471627  585096 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.471736  585096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:12:49.473381  585096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.473676  585096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:12:49.473768  585096 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:12:49.473874  585096 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473897  585096 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142496"
	I1008 19:12:49.473899  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:12:49.473904  585096 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473923  585096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142496"
	W1008 19:12:49.473907  585096 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:12:49.473939  585096 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473955  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.473967  585096 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.473981  585096 addons.go:243] addon metrics-server should already be in state true
	I1008 19:12:49.474022  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.474283  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474313  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474338  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474366  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474373  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474405  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.475217  585096 out.go:177] * Verifying Kubernetes components...
	I1008 19:12:49.476402  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:12:49.490880  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1008 19:12:49.491405  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.492070  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.492093  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.492454  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.492990  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.493040  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.493623  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 19:12:49.493646  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1008 19:12:49.494011  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494067  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494548  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494565  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494763  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494790  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494930  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495102  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.495276  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495871  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.495908  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.498744  585096 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.498764  585096 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:12:49.498787  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.499142  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.499173  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.514047  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1008 19:12:49.514527  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.515028  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.515046  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.515493  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.515662  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.516519  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1008 19:12:49.517015  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.517643  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.517661  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.517706  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.517757  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I1008 19:12:49.518133  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.518458  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.518617  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.518643  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.518681  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.519107  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.519527  585096 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:12:49.519808  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.519923  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.520415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.520624  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:12:49.520644  585096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:12:49.520669  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.522226  585096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:12:49.523372  585096 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.523396  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:12:49.523415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.523947  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524437  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.524464  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.524830  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.525042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.525198  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.527349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.527693  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527842  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.528009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.528186  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.528325  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.536509  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I1008 19:12:49.536879  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.537341  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.537359  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.537606  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.537897  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.539570  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.539810  585096 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.539831  585096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:12:49.539848  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.542955  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.543522  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543543  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.543726  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.543888  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.544023  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.721845  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:12:49.741622  585096 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.763968  585096 node_ready.go:49] node "default-k8s-diff-port-142496" has status "Ready":"True"
	I1008 19:12:49.764005  585096 node_ready.go:38] duration metric: took 22.348135ms for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.764019  585096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:49.793150  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:49.867565  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.904041  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.912694  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:12:49.912723  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:12:49.962053  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:12:49.962082  585096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:12:50.004678  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.004709  585096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:12:50.068528  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.394807  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394824  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394836  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.394841  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395140  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395161  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395172  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395181  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395181  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395195  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395201  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.395205  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395262  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395425  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395439  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395616  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395668  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395643  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416509  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.416532  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.416815  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416865  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.416880  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634404  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634428  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634722  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.634744  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634752  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634761  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634769  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635036  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635066  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.635079  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.635100  585096 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-142496"
	I1008 19:12:50.636555  585096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:12:51.683959  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.182376  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:50.637816  585096 addons.go:510] duration metric: took 1.164063633s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:12:51.799881  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.299619  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:56.183179  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683102  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683159  584371 pod_ready.go:82] duration metric: took 4m0.006623922s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:58.683173  584371 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:12:58.683184  584371 pod_ready.go:39] duration metric: took 4m4.541923995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:58.683207  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:12:58.683245  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:58.683296  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:58.729385  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:58.729407  584371 cri.go:89] found id: ""
	I1008 19:12:58.729417  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:12:58.729472  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.734291  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:58.734382  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:58.772015  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:12:58.772050  584371 cri.go:89] found id: ""
	I1008 19:12:58.772062  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:12:58.772123  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.776231  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:58.776300  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:58.812962  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:58.812982  584371 cri.go:89] found id: ""
	I1008 19:12:58.812991  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:12:58.813046  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.816951  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:58.817002  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:58.852918  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:58.852939  584371 cri.go:89] found id: ""
	I1008 19:12:58.852946  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:12:58.852992  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.857184  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:58.857245  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:58.895233  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:12:58.895254  584371 cri.go:89] found id: ""
	I1008 19:12:58.895264  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:12:58.895317  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.899301  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:58.899354  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:58.933918  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:58.933946  584371 cri.go:89] found id: ""
	I1008 19:12:58.933956  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:12:58.934003  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.938274  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:58.938361  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:58.980067  584371 cri.go:89] found id: ""
	I1008 19:12:58.980094  584371 logs.go:282] 0 containers: []
	W1008 19:12:58.980104  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:58.980113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:58.980174  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:59.013783  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:12:59.013812  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.013817  584371 cri.go:89] found id: ""
	I1008 19:12:59.013827  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:12:59.013886  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.018420  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.024462  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:12:59.024486  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.062654  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:12:59.062688  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:59.110932  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:59.110966  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:59.248699  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:12:59.248734  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:59.294439  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:12:59.294473  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:59.331208  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:12:59.331241  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:59.374242  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:12:59.374283  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:56.799487  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.800290  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:59.800320  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.800349  585096 pod_ready.go:82] duration metric: took 10.007162242s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.800361  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804590  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.804609  585096 pod_ready.go:82] duration metric: took 4.240474ms for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804620  585096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808737  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.808754  585096 pod_ready.go:82] duration metric: took 4.127686ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808762  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813126  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.813146  585096 pod_ready.go:82] duration metric: took 4.37796ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813154  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817020  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.817039  585096 pod_ready.go:82] duration metric: took 3.878053ms for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817048  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197958  585096 pod_ready.go:93] pod "kube-proxy-wd5kv" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.197983  585096 pod_ready.go:82] duration metric: took 380.928087ms for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197992  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597495  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.597521  585096 pod_ready.go:82] duration metric: took 399.522182ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597529  585096 pod_ready.go:39] duration metric: took 10.833495765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:13:00.597545  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:13:00.597612  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:00.613266  585096 api_server.go:72] duration metric: took 11.139554705s to wait for apiserver process to appear ...
	I1008 19:13:00.613289  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:00.613308  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:13:00.618420  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:13:00.619376  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:00.619399  585096 api_server.go:131] duration metric: took 6.102941ms to wait for apiserver health ...
	I1008 19:13:00.619407  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:00.800687  585096 system_pods.go:59] 9 kube-system pods found
	I1008 19:13:00.800720  585096 system_pods.go:61] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:00.800729  585096 system_pods.go:61] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:00.800733  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:00.800737  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:00.800740  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:00.800743  585096 system_pods.go:61] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:00.800747  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:00.800752  585096 system_pods.go:61] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:00.800755  585096 system_pods.go:61] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:00.800765  585096 system_pods.go:74] duration metric: took 181.352111ms to wait for pod list to return data ...
	I1008 19:13:00.800773  585096 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:00.997631  585096 default_sa.go:45] found service account: "default"
	I1008 19:13:00.997657  585096 default_sa.go:55] duration metric: took 196.876434ms for default service account to be created ...
	I1008 19:13:00.997667  585096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:01.199366  585096 system_pods.go:86] 9 kube-system pods found
	I1008 19:13:01.199396  585096 system_pods.go:89] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:01.199402  585096 system_pods.go:89] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:01.199406  585096 system_pods.go:89] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:01.199409  585096 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:01.199413  585096 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:01.199416  585096 system_pods.go:89] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:01.199419  585096 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:01.199426  585096 system_pods.go:89] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:01.199430  585096 system_pods.go:89] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:01.199439  585096 system_pods.go:126] duration metric: took 201.766214ms to wait for k8s-apps to be running ...
	I1008 19:13:01.199447  585096 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:01.199492  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:01.214863  585096 system_svc.go:56] duration metric: took 15.401989ms WaitForService to wait for kubelet
	I1008 19:13:01.214895  585096 kubeadm.go:582] duration metric: took 11.741185862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:01.214919  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:01.397506  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:01.397530  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:01.397541  585096 node_conditions.go:105] duration metric: took 182.616774ms to run NodePressure ...
	I1008 19:13:01.397553  585096 start.go:241] waiting for startup goroutines ...
	I1008 19:13:01.397560  585096 start.go:246] waiting for cluster config update ...
	I1008 19:13:01.397570  585096 start.go:255] writing updated cluster config ...
	I1008 19:13:01.397828  585096 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:01.448158  585096 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:01.450201  585096 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142496" cluster and "default" namespace by default
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:59.438777  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:59.438814  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:59.945253  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:59.945302  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:00.016570  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:00.016607  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:00.034150  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:00.034183  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:00.075423  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:00.075456  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:00.111132  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:00.111164  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.646570  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:02.666594  584371 api_server.go:72] duration metric: took 4m13.762192057s to wait for apiserver process to appear ...
	I1008 19:13:02.666620  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:02.666663  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:02.666718  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:02.704214  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:02.704242  584371 cri.go:89] found id: ""
	I1008 19:13:02.704250  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:02.704298  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.708636  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:02.708717  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:02.748418  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:02.748444  584371 cri.go:89] found id: ""
	I1008 19:13:02.748455  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:02.748515  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.753267  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:02.753332  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:02.790534  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:02.790562  584371 cri.go:89] found id: ""
	I1008 19:13:02.790571  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:02.790636  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.794880  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:02.794950  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:02.834754  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:02.834774  584371 cri.go:89] found id: ""
	I1008 19:13:02.834781  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:02.834830  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.839391  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:02.839463  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:02.878344  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:02.878371  584371 cri.go:89] found id: ""
	I1008 19:13:02.878380  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:02.878425  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.882939  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:02.883025  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:02.920081  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:02.920104  584371 cri.go:89] found id: ""
	I1008 19:13:02.920112  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:02.920168  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.924141  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:02.924205  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:02.959700  584371 cri.go:89] found id: ""
	I1008 19:13:02.959730  584371 logs.go:282] 0 containers: []
	W1008 19:13:02.959741  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:02.959750  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:02.959822  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:02.996900  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.996927  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:02.996933  584371 cri.go:89] found id: ""
	I1008 19:13:02.996940  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:02.996989  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.001152  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.005021  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:03.005046  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:03.069775  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:03.069813  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:03.120028  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:03.120060  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:03.155756  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:03.155784  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:03.195587  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:03.195624  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:03.231844  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:03.231875  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:03.271156  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:03.271187  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:03.286994  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:03.287017  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:03.397237  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:03.397269  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:03.442373  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:03.442407  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:03.500191  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:03.500222  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:03.535448  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:03.535490  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:03.966382  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:03.966425  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
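	(The "container status" step above relies on a small shell fallback rather than a fixed binary path: resolve crictl via which, fall back to the bare name if that lookup prints nothing, and if the whole crictl invocation still fails, try docker instead. A minimal restatement of that one-liner, exactly as it appears in the Run: line above:
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)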
	I1008 19:13:06.513885  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:13:06.518111  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:13:06.519310  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:06.519331  584371 api_server.go:131] duration metric: took 3.852704338s to wait for apiserver health ...
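	(The health wait above resolves once the apiserver's /healthz endpoint answers 200 with the body "ok". A minimal sketch of that probe run from outside the node follows; the address is taken from the log, and the -k flag is an assumption made here only because the endpoint is served with the cluster's self-signed certificates:
		# probe the secured apiserver port directly; a healthy apiserver replies 200 with body "ok"
		curl -k https://192.168.61.141:8443/healthz
	)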
	I1008 19:13:06.519341  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:06.519370  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:06.519417  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:06.558940  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:06.558965  584371 cri.go:89] found id: ""
	I1008 19:13:06.558979  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:06.559029  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.563471  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:06.563537  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:06.607844  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:06.607873  584371 cri.go:89] found id: ""
	I1008 19:13:06.607883  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:06.607944  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.612399  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:06.612456  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:06.645502  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:06.645521  584371 cri.go:89] found id: ""
	I1008 19:13:06.645528  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:06.645575  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.649442  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:06.649519  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:06.685085  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:06.685114  584371 cri.go:89] found id: ""
	I1008 19:13:06.685126  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:06.685183  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.689859  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:06.689935  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:06.724775  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:06.724803  584371 cri.go:89] found id: ""
	I1008 19:13:06.724814  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:06.724873  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.729489  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:06.729542  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:06.776599  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:06.776626  584371 cri.go:89] found id: ""
	I1008 19:13:06.776636  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:06.776704  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.780790  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:06.780863  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:06.817072  584371 cri.go:89] found id: ""
	I1008 19:13:06.817097  584371 logs.go:282] 0 containers: []
	W1008 19:13:06.817106  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:06.817113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:06.817171  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:06.855429  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:06.855453  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:06.855457  584371 cri.go:89] found id: ""
	I1008 19:13:06.855465  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:06.855520  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.859774  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.863800  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:06.863821  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:06.931413  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:06.931443  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:06.946213  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:06.946236  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:07.070604  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:07.070640  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:07.114749  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:07.114782  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:07.152555  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:07.152584  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:07.192730  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:07.192759  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:07.242001  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:07.242036  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:07.612662  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:07.612714  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:07.656655  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:07.656700  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:07.695462  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:07.695494  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:07.733107  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:07.733143  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:07.779348  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:07.779382  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:10.325584  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:13:10.325616  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.325620  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.325624  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.325628  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.325631  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.325634  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.325639  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.325644  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.325651  584371 system_pods.go:74] duration metric: took 3.806304739s to wait for pod list to return data ...
	I1008 19:13:10.325659  584371 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:10.328062  584371 default_sa.go:45] found service account: "default"
	I1008 19:13:10.328082  584371 default_sa.go:55] duration metric: took 2.41797ms for default service account to be created ...
	I1008 19:13:10.328089  584371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:10.332201  584371 system_pods.go:86] 8 kube-system pods found
	I1008 19:13:10.332224  584371 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.332229  584371 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.332233  584371 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.332237  584371 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.332241  584371 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.332245  584371 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.332250  584371 system_pods.go:89] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.332254  584371 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.332261  584371 system_pods.go:126] duration metric: took 4.167739ms to wait for k8s-apps to be running ...
	I1008 19:13:10.332270  584371 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:10.332313  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:10.350257  584371 system_svc.go:56] duration metric: took 17.979349ms WaitForService to wait for kubelet
	I1008 19:13:10.350288  584371 kubeadm.go:582] duration metric: took 4m21.445892386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:10.350310  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:10.352582  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:10.352598  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:10.352609  584371 node_conditions.go:105] duration metric: took 2.294326ms to run NodePressure ...
	I1008 19:13:10.352620  584371 start.go:241] waiting for startup goroutines ...
	I1008 19:13:10.352626  584371 start.go:246] waiting for cluster config update ...
	I1008 19:13:10.352636  584371 start.go:255] writing updated cluster config ...
	I1008 19:13:10.352882  584371 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:10.401998  584371 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:10.404037  584371 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
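	(The failure above is the kubelet never answering its local health endpoint, so kubeadm times out waiting for the static control-plane pods. A condensed, runnable version of the troubleshooting sequence that the kubeadm output itself suggests, with the commands taken from the log; run them as root or via sudo on the affected node:
		# is the kubelet unit running, and what is it logging?
		systemctl status kubelet
		journalctl -xeu kubelet
		# the same probe kubeadm retries while waiting for the control plane
		curl -sSL http://localhost:10248/healthz
		# if the kubelet is up, look for a crashed control-plane container
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the listing above
	)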
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
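	(Before retrying kubeadm init, the cleanup above resets kubeadm state and then removes any leftover kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint. The per-file grep-then-rm steps in the log can be condensed into a single loop; this is a sketch of the same logic under that reading, not the exact minikube code:
		sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # keep a file only if it already points at the expected control-plane endpoint
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
		    || sudo rm -f /etc/kubernetes/$f
		done
	)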
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 
	
	
	==> CRI-O <==
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.701385795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415513701347692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa0d0a71-9032-403b-ae61-a02602afb1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.701908707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a28de3b6-1125-4d3d-9b85-77c0ccbc1873 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.701970172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a28de3b6-1125-4d3d-9b85-77c0ccbc1873 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.702021332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a28de3b6-1125-4d3d-9b85-77c0ccbc1873 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.731831446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c01af5fe-3637-493a-bc7f-c4da85305a14 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.731894467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c01af5fe-3637-493a-bc7f-c4da85305a14 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.732862035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0311744e-9b9f-4aef-b611-f75e8edfcf08 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.733282094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415513733258604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0311744e-9b9f-4aef-b611-f75e8edfcf08 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.733739160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06aedeee-428b-416f-ba5f-56552f8e923e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.733790014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06aedeee-428b-416f-ba5f-56552f8e923e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.733821324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=06aedeee-428b-416f-ba5f-56552f8e923e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.766211164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6145ff6a-d2d0-4457-aa79-6c3e451ed792 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.766299086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6145ff6a-d2d0-4457-aa79-6c3e451ed792 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.767604944Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3271670-533b-4014-addd-e81d7dbfea1c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.768025094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415513768005519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3271670-533b-4014-addd-e81d7dbfea1c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.768655006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b25662c-c7ca-4706-92d8-d96467989f2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.768718220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b25662c-c7ca-4706-92d8-d96467989f2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.768759459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5b25662c-c7ca-4706-92d8-d96467989f2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.798422285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2952af56-6dea-4c97-8584-681827f5bcd6 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.798518437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2952af56-6dea-4c97-8584-681827f5bcd6 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.801301071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e155f50-1ba5-424c-9bc4-92b74caf07d8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.801861085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415513801832007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e155f50-1ba5-424c-9bc4-92b74caf07d8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.802492110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0cb9057-e01e-4787-ba50-048f12b124a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.802543017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0cb9057-e01e-4787-ba50-048f12b124a4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:25:13 old-k8s-version-256554 crio[632]: time="2024-10-08 19:25:13.802571450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e0cb9057-e01e-4787-ba50-048f12b124a4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050416] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044675] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.049563] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.581000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586261] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 8 19:08] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.059019] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068335] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.205375] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.133900] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.277385] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.210273] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066679] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.142543] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +12.037421] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 8 19:12] systemd-fstab-generator[5070]: Ignoring "noauto" option for root device
	[Oct 8 19:14] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.062152] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:25:13 up 17 min,  0 users,  load average: 0.00, 0.04, 0.01
	Linux old-k8s-version-256554 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000c1c840, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: net.cgoIPLookup(0xc000d39320, 0x48ab5d6, 0x3, 0xc000c1c840, 0x1f)
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: created by net.cgoLookupIP
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: goroutine 135 [select]:
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0004829b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001c47e0, 0x0, 0x0)
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008c28c0)
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 08 19:25:08 old-k8s-version-256554 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 08 19:25:08 old-k8s-version-256554 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 08 19:25:08 old-k8s-version-256554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 08 19:25:09 old-k8s-version-256554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 08 19:25:09 old-k8s-version-256554 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 08 19:25:09 old-k8s-version-256554 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 08 19:25:09 old-k8s-version-256554 kubelet[6527]: I1008 19:25:09.165007    6527 server.go:416] Version: v1.20.0
	Oct 08 19:25:09 old-k8s-version-256554 kubelet[6527]: I1008 19:25:09.165399    6527 server.go:837] Client rotation is on, will bootstrap in background
	Oct 08 19:25:09 old-k8s-version-256554 kubelet[6527]: I1008 19:25:09.167797    6527 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 08 19:25:09 old-k8s-version-256554 kubelet[6527]: W1008 19:25:09.168796    6527 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 08 19:25:09 old-k8s-version-256554 kubelet[6527]: I1008 19:25:09.168857    6527 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (242.425327ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-256554" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)
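Note: the kubeadm output captured above shows the control plane never came up because the v1.20.0 kubelet kept exiting (the healthz probe on 127.0.0.1:10248 was refused and systemd reports the restart counter at 114). The log's own advice is to inspect the kubelet and the container runtime directly and, if the cgroup driver is the culprit, to retry the start with an explicit kubelet cgroup driver. A minimal sketch of those steps, using only commands quoted in the log itself (profile name taken from this run; not part of the test execution):

	# inside the node (e.g. via `minikube ssh -p old-k8s-version-256554`), as kubeadm suggests:
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# then, per the minikube suggestion in the log, retry the start with an explicit cgroup driver:
	minikube start -p old-k8s-version-256554 --extra-config=kubelet.cgroup-driver=systemd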

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (488.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783146 -n embed-certs-783146
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-08 19:29:35.964193532 +0000 UTC m=+6969.453345715
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-783146 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-783146 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.843µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-783146 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
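Note: the assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference the custom image "registry.k8s.io/echoserver:1.4" that was passed via --images=MetricsScraper=... (see the Audit table below), but the describe call above timed out before any deployment info could be collected. A minimal sketch of how that check could be made by hand, assuming the cluster's apiserver were reachable (it is not in this run):

	kubectl --context embed-certs-783146 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

	# the check is roughly: the printed image string should contain registry.k8s.io/echoserver:1.4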
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-783146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-783146 logs -n 25: (1.408742227s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:27 UTC | 08 Oct 24 19:28 UTC |
	| start   | -p newest-cni-602180 --memory=2200 --alsologtostderr   | newest-cni-602180            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC | 08 Oct 24 19:28 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC | 08 Oct 24 19:28 UTC |
	| start   | -p auto-981259 --memory=3072                           | auto-981259                  | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC | 08 Oct 24 19:29 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-602180             | newest-cni-602180            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC | 08 Oct 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-602180                                   | newest-cni-602180            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC | 08 Oct 24 19:28 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-602180                  | newest-cni-602180            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC | 08 Oct 24 19:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-602180 --memory=2200 --alsologtostderr   | newest-cni-602180            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p auto-981259 pgrep -a                                | auto-981259                  | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:28:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:28:58.686741  593060 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:28:58.686871  593060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:28:58.686881  593060 out.go:358] Setting ErrFile to fd 2...
	I1008 19:28:58.686886  593060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:28:58.687071  593060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:28:58.687590  593060 out.go:352] Setting JSON to false
	I1008 19:28:58.688563  593060 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11491,"bootTime":1728404248,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:28:58.688674  593060 start.go:139] virtualization: kvm guest
	I1008 19:28:58.778005  593060 out.go:177] * [newest-cni-602180] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:28:58.016796  592450 out.go:235]   - Generating certificates and keys ...
	I1008 19:28:58.016915  592450 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:28:58.016994  592450 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:28:58.037772  592450 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 19:28:58.446038  592450 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 19:28:58.535520  592450 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 19:28:58.611569  592450 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 19:28:58.842703  592450 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 19:28:58.842877  592450 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-981259 localhost] and IPs [192.168.61.179 127.0.0.1 ::1]
	I1008 19:28:58.900622  593060 notify.go:220] Checking for updates...
	I1008 19:28:58.975955  593060 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:28:59.106785  593060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:28:59.178242  593060 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:28:59.279344  593060 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:28:59.315739  593060 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:28:59.317189  593060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:28:59.319317  593060 config.go:182] Loaded profile config "newest-cni-602180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:28:59.319889  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:28:59.319963  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:28:59.335107  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I1008 19:28:59.335530  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:28:59.336042  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:28:59.336065  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:28:59.336419  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:28:59.336622  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:28:59.336911  593060 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:28:59.337318  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:28:59.337360  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:28:59.352617  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I1008 19:28:59.353006  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:28:59.353470  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:28:59.353490  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:28:59.353823  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:28:59.354043  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:28:59.389307  593060 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:28:59.390442  593060 start.go:297] selected driver: kvm2
	I1008 19:28:59.390460  593060 start.go:901] validating driver "kvm2" against &{Name:newest-cni-602180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-602180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:28:59.390614  593060 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:28:59.391590  593060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:28:59.391683  593060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:28:59.405970  593060 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:28:59.406380  593060 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 19:28:59.406412  593060 cni.go:84] Creating CNI manager for ""
	I1008 19:28:59.406457  593060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:28:59.406495  593060 start.go:340] cluster config:
	{Name:newest-cni-602180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-602180 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:28:59.406619  593060 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:28:59.409391  593060 out.go:177] * Starting "newest-cni-602180" primary control-plane node in "newest-cni-602180" cluster
	I1008 19:28:59.410772  593060 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:28:59.410818  593060 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 19:28:59.410835  593060 cache.go:56] Caching tarball of preloaded images
	I1008 19:28:59.410927  593060 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:28:59.410941  593060 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 19:28:59.411042  593060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/config.json ...
	I1008 19:28:59.411224  593060 start.go:360] acquireMachinesLock for newest-cni-602180: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:28:59.411266  593060 start.go:364] duration metric: took 23.95µs to acquireMachinesLock for "newest-cni-602180"
	I1008 19:28:59.411292  593060 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:28:59.411301  593060 fix.go:54] fixHost starting: 
	I1008 19:28:59.411562  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:28:59.411598  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:28:59.425867  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
	I1008 19:28:59.426296  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:28:59.426799  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:28:59.426836  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:28:59.427258  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:28:59.427486  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:28:59.427664  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetState
	I1008 19:28:59.429227  593060 fix.go:112] recreateIfNeeded on newest-cni-602180: state=Stopped err=<nil>
	I1008 19:28:59.429255  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	W1008 19:28:59.429408  593060 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:28:59.431348  593060 out.go:177] * Restarting existing kvm2 VM for "newest-cni-602180" ...
	I1008 19:28:58.931829  592450 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 19:28:58.932017  592450 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-981259 localhost] and IPs [192.168.61.179 127.0.0.1 ::1]
	I1008 19:28:59.070626  592450 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 19:28:59.233650  592450 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 19:28:59.373435  592450 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 19:28:59.373588  592450 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:28:59.549209  592450 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:28:59.850913  592450 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:28:59.952848  592450 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:29:00.039401  592450 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:29:00.242535  592450 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:29:00.243272  592450 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:29:00.245951  592450 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:28:59.432690  593060 main.go:141] libmachine: (newest-cni-602180) Calling .Start
	I1008 19:28:59.432850  593060 main.go:141] libmachine: (newest-cni-602180) Ensuring networks are active...
	I1008 19:28:59.433603  593060 main.go:141] libmachine: (newest-cni-602180) Ensuring network default is active
	I1008 19:28:59.433928  593060 main.go:141] libmachine: (newest-cni-602180) Ensuring network mk-newest-cni-602180 is active
	I1008 19:28:59.434286  593060 main.go:141] libmachine: (newest-cni-602180) Getting domain xml...
	I1008 19:28:59.435210  593060 main.go:141] libmachine: (newest-cni-602180) Creating domain...
	I1008 19:29:00.684847  593060 main.go:141] libmachine: (newest-cni-602180) Waiting to get IP...
	I1008 19:29:00.685772  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:00.686207  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:00.686337  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:00.686181  593095 retry.go:31] will retry after 247.784909ms: waiting for machine to come up
	I1008 19:29:00.935728  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:00.936303  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:00.936329  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:00.936257  593095 retry.go:31] will retry after 332.418997ms: waiting for machine to come up
	I1008 19:29:01.270864  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:01.271399  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:01.271422  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:01.271360  593095 retry.go:31] will retry after 345.108255ms: waiting for machine to come up
	I1008 19:29:01.617666  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:01.618194  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:01.618227  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:01.618158  593095 retry.go:31] will retry after 609.882696ms: waiting for machine to come up
	I1008 19:29:02.229982  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:02.230568  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:02.230619  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:02.230521  593095 retry.go:31] will retry after 580.883494ms: waiting for machine to come up
	I1008 19:29:02.813483  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:02.813980  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:02.813999  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:02.813894  593095 retry.go:31] will retry after 709.663193ms: waiting for machine to come up
	I1008 19:29:03.524702  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:03.525148  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:03.525175  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:03.525100  593095 retry.go:31] will retry after 1.097021275s: waiting for machine to come up
	I1008 19:29:00.247927  592450 out.go:235]   - Booting up control plane ...
	I1008 19:29:00.248053  592450 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:29:00.249351  592450 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:29:00.250263  592450 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:29:00.266384  592450 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:29:00.272744  592450 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:29:00.272817  592450 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:29:00.404224  592450 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:29:00.404346  592450 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:29:01.405675  592450 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001135663s
	I1008 19:29:01.405803  592450 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:29:06.411929  592450 kubeadm.go:310] [api-check] The API server is healthy after 5.003347654s
	I1008 19:29:06.430167  592450 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:29:06.450014  592450 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:29:06.483387  592450 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:29:06.483698  592450 kubeadm.go:310] [mark-control-plane] Marking the node auto-981259 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:29:06.495773  592450 kubeadm.go:310] [bootstrap-token] Using token: 9kicc9.uylwx6festvr0dvj
	I1008 19:29:06.497199  592450 out.go:235]   - Configuring RBAC rules ...
	I1008 19:29:06.497354  592450 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:29:06.502337  592450 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:29:06.510067  592450 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:29:06.514642  592450 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:29:06.518458  592450 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:29:06.524346  592450 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:29:06.819123  592450 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:29:07.259200  592450 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:29:07.817336  592450 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:29:07.818284  592450 kubeadm.go:310] 
	I1008 19:29:07.818393  592450 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:29:07.818414  592450 kubeadm.go:310] 
	I1008 19:29:07.818501  592450 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:29:07.818513  592450 kubeadm.go:310] 
	I1008 19:29:07.818547  592450 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:29:07.818612  592450 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:29:07.818676  592450 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:29:07.818688  592450 kubeadm.go:310] 
	I1008 19:29:07.818761  592450 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:29:07.818768  592450 kubeadm.go:310] 
	I1008 19:29:07.818812  592450 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:29:07.818819  592450 kubeadm.go:310] 
	I1008 19:29:07.818862  592450 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:29:07.818925  592450 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:29:07.818983  592450 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:29:07.818989  592450 kubeadm.go:310] 
	I1008 19:29:07.819058  592450 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:29:07.819123  592450 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:29:07.819129  592450 kubeadm.go:310] 
	I1008 19:29:07.819197  592450 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9kicc9.uylwx6festvr0dvj \
	I1008 19:29:07.819285  592450 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:29:07.819305  592450 kubeadm.go:310] 	--control-plane 
	I1008 19:29:07.819311  592450 kubeadm.go:310] 
	I1008 19:29:07.819380  592450 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:29:07.819386  592450 kubeadm.go:310] 
	I1008 19:29:07.819457  592450 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9kicc9.uylwx6festvr0dvj \
	I1008 19:29:07.819558  592450 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:29:07.820946  592450 kubeadm.go:310] W1008 19:28:57.659049     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:29:07.821210  592450 kubeadm.go:310] W1008 19:28:57.660294     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:29:07.821336  592450 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:29:07.821365  592450 cni.go:84] Creating CNI manager for ""
	I1008 19:29:07.821391  592450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:29:07.823336  592450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:29:04.624125  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:04.624706  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:04.624738  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:04.624642  593095 retry.go:31] will retry after 1.294695391s: waiting for machine to come up
	I1008 19:29:05.920663  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:05.921201  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:05.921225  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:05.921173  593095 retry.go:31] will retry after 1.426790165s: waiting for machine to come up
	I1008 19:29:07.349681  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:07.350197  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:07.350222  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:07.350154  593095 retry.go:31] will retry after 1.465339391s: waiting for machine to come up
	I1008 19:29:07.824595  592450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:29:07.841664  592450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
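For context, the bridge CNI step above drops a conflist into /etc/cni/net.d so the crio runtime can wire pod networking through a Linux bridge. The snippet below merely prints a generic bridge conflist of the same shape; the field values (subnet, plugin list) are assumptions for illustration and not the exact 496-byte 1-k8s.conflist minikube copies.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Generic bridge CNI conflist, for illustration only.
    	conflist := map[string]any{
    		"cniVersion": "1.0.0",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]any{"portMappings": true},
    			},
    		},
    	}
    	out, err := json.MarshalIndent(conflist, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }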
	I1008 19:29:07.865123  592450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:29:07.865191  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:07.865233  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-981259 minikube.k8s.io/updated_at=2024_10_08T19_29_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=auto-981259 minikube.k8s.io/primary=true
	I1008 19:29:08.007673  592450 ops.go:34] apiserver oom_adj: -16
	I1008 19:29:08.014203  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:08.514458  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:09.014555  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:09.515308  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:10.015160  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:10.514488  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:11.014265  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:11.515225  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:12.014775  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:12.515052  592450 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:29:12.688550  592450 kubeadm.go:1113] duration metric: took 4.82343114s to wait for elevateKubeSystemPrivileges
	I1008 19:29:12.688584  592450 kubeadm.go:394] duration metric: took 15.192955179s to StartCluster
	I1008 19:29:12.688603  592450 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:12.688675  592450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:29:12.690444  592450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:12.690671  592450 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:29:12.690692  592450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 19:29:12.690760  592450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:29:12.690887  592450 addons.go:69] Setting storage-provisioner=true in profile "auto-981259"
	I1008 19:29:12.690911  592450 addons.go:234] Setting addon storage-provisioner=true in "auto-981259"
	I1008 19:29:12.690922  592450 config.go:182] Loaded profile config "auto-981259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:29:12.690942  592450 addons.go:69] Setting default-storageclass=true in profile "auto-981259"
	I1008 19:29:12.690957  592450 host.go:66] Checking if "auto-981259" exists ...
	I1008 19:29:12.690977  592450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-981259"
	I1008 19:29:12.691433  592450 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:12.691486  592450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:12.691497  592450 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:12.691549  592450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:12.692262  592450 out.go:177] * Verifying Kubernetes components...
	I1008 19:29:12.693832  592450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:29:12.706719  592450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39639
	I1008 19:29:12.706726  592450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1008 19:29:12.707129  592450 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:12.707254  592450 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:12.707596  592450 main.go:141] libmachine: Using API Version  1
	I1008 19:29:12.707621  592450 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:12.707795  592450 main.go:141] libmachine: Using API Version  1
	I1008 19:29:12.707817  592450 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:12.707942  592450 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:12.708146  592450 main.go:141] libmachine: (auto-981259) Calling .GetState
	I1008 19:29:12.708156  592450 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:12.708611  592450 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:12.708647  592450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:12.711948  592450 addons.go:234] Setting addon default-storageclass=true in "auto-981259"
	I1008 19:29:12.711990  592450 host.go:66] Checking if "auto-981259" exists ...
	I1008 19:29:12.712377  592450 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:12.712430  592450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:12.724077  592450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I1008 19:29:12.724464  592450 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:12.725000  592450 main.go:141] libmachine: Using API Version  1
	I1008 19:29:12.725030  592450 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:12.725324  592450 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:12.725535  592450 main.go:141] libmachine: (auto-981259) Calling .GetState
	I1008 19:29:12.727158  592450 main.go:141] libmachine: (auto-981259) Calling .DriverName
	I1008 19:29:12.727286  592450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46741
	I1008 19:29:12.727700  592450 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:12.728127  592450 main.go:141] libmachine: Using API Version  1
	I1008 19:29:12.728165  592450 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:12.728557  592450 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:12.729067  592450 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:12.729093  592450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:29:08.817404  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:08.817870  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:08.817896  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:08.817793  593095 retry.go:31] will retry after 1.78133746s: waiting for machine to come up
	I1008 19:29:10.601546  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:10.602094  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:10.602157  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:10.602054  593095 retry.go:31] will retry after 3.206212395s: waiting for machine to come up
	I1008 19:29:12.729100  592450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:12.730768  592450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:29:12.730791  592450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:29:12.730813  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHHostname
	I1008 19:29:12.735177  592450 main.go:141] libmachine: (auto-981259) DBG | domain auto-981259 has defined MAC address 52:54:00:b3:53:e3 in network mk-auto-981259
	I1008 19:29:12.735366  592450 main.go:141] libmachine: (auto-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:53:e3", ip: ""} in network mk-auto-981259: {Iface:virbr4 ExpiryTime:2024-10-08 20:28:41 +0000 UTC Type:0 Mac:52:54:00:b3:53:e3 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:auto-981259 Clientid:01:52:54:00:b3:53:e3}
	I1008 19:29:12.735407  592450 main.go:141] libmachine: (auto-981259) DBG | domain auto-981259 has defined IP address 192.168.61.179 and MAC address 52:54:00:b3:53:e3 in network mk-auto-981259
	I1008 19:29:12.735672  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHPort
	I1008 19:29:12.735856  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHKeyPath
	I1008 19:29:12.736275  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHUsername
	I1008 19:29:12.736439  592450 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/auto-981259/id_rsa Username:docker}
	I1008 19:29:12.743851  592450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1008 19:29:12.744269  592450 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:12.745070  592450 main.go:141] libmachine: Using API Version  1
	I1008 19:29:12.745105  592450 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:12.745394  592450 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:12.745600  592450 main.go:141] libmachine: (auto-981259) Calling .GetState
	I1008 19:29:12.747334  592450 main.go:141] libmachine: (auto-981259) Calling .DriverName
	I1008 19:29:12.747559  592450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:29:12.747576  592450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:29:12.747594  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHHostname
	I1008 19:29:12.750473  592450 main.go:141] libmachine: (auto-981259) DBG | domain auto-981259 has defined MAC address 52:54:00:b3:53:e3 in network mk-auto-981259
	I1008 19:29:12.750838  592450 main.go:141] libmachine: (auto-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:53:e3", ip: ""} in network mk-auto-981259: {Iface:virbr4 ExpiryTime:2024-10-08 20:28:41 +0000 UTC Type:0 Mac:52:54:00:b3:53:e3 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:auto-981259 Clientid:01:52:54:00:b3:53:e3}
	I1008 19:29:12.750866  592450 main.go:141] libmachine: (auto-981259) DBG | domain auto-981259 has defined IP address 192.168.61.179 and MAC address 52:54:00:b3:53:e3 in network mk-auto-981259
	I1008 19:29:12.751079  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHPort
	I1008 19:29:12.751281  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHKeyPath
	I1008 19:29:12.751459  592450 main.go:141] libmachine: (auto-981259) Calling .GetSSHUsername
	I1008 19:29:12.751625  592450 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/auto-981259/id_rsa Username:docker}
	I1008 19:29:13.064273  592450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:29:13.064322  592450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 19:29:13.112471  592450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:29:13.119797  592450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:29:13.130728  592450 node_ready.go:35] waiting up to 15m0s for node "auto-981259" to be "Ready" ...
	I1008 19:29:13.146735  592450 node_ready.go:49] node "auto-981259" has status "Ready":"True"
	I1008 19:29:13.146767  592450 node_ready.go:38] duration metric: took 15.997151ms for node "auto-981259" to be "Ready" ...
	I1008 19:29:13.146779  592450 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:29:13.163015  592450 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-gpwph" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:13.701211  592450 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1008 19:29:13.701333  592450 main.go:141] libmachine: Making call to close driver server
	I1008 19:29:13.701363  592450 main.go:141] libmachine: (auto-981259) Calling .Close
	I1008 19:29:13.701774  592450 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:29:13.701794  592450 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:29:13.701803  592450 main.go:141] libmachine: Making call to close driver server
	I1008 19:29:13.701810  592450 main.go:141] libmachine: (auto-981259) Calling .Close
	I1008 19:29:13.701843  592450 main.go:141] libmachine: (auto-981259) DBG | Closing plugin on server side
	I1008 19:29:13.702111  592450 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:29:13.702136  592450 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:29:13.702121  592450 main.go:141] libmachine: (auto-981259) DBG | Closing plugin on server side
	I1008 19:29:13.715729  592450 main.go:141] libmachine: Making call to close driver server
	I1008 19:29:13.715750  592450 main.go:141] libmachine: (auto-981259) Calling .Close
	I1008 19:29:13.716076  592450 main.go:141] libmachine: (auto-981259) DBG | Closing plugin on server side
	I1008 19:29:13.716111  592450 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:29:13.716120  592450 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:29:14.138685  592450 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.018845397s)
	I1008 19:29:14.138766  592450 main.go:141] libmachine: Making call to close driver server
	I1008 19:29:14.138780  592450 main.go:141] libmachine: (auto-981259) Calling .Close
	I1008 19:29:14.139175  592450 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:29:14.139197  592450 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:29:14.139206  592450 main.go:141] libmachine: Making call to close driver server
	I1008 19:29:14.139207  592450 main.go:141] libmachine: (auto-981259) DBG | Closing plugin on server side
	I1008 19:29:14.139214  592450 main.go:141] libmachine: (auto-981259) Calling .Close
	I1008 19:29:14.139480  592450 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:29:14.139501  592450 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:29:14.141097  592450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1008 19:29:13.811396  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:13.811937  593060 main.go:141] libmachine: (newest-cni-602180) DBG | unable to find current IP address of domain newest-cni-602180 in network mk-newest-cni-602180
	I1008 19:29:13.811971  593060 main.go:141] libmachine: (newest-cni-602180) DBG | I1008 19:29:13.811875  593095 retry.go:31] will retry after 4.465533845s: waiting for machine to come up
	I1008 19:29:18.279754  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.280182  593060 main.go:141] libmachine: (newest-cni-602180) Found IP for machine: 192.168.39.20
	I1008 19:29:18.280225  593060 main.go:141] libmachine: (newest-cni-602180) Reserving static IP address...
	I1008 19:29:18.280255  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has current primary IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.280694  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "newest-cni-602180", mac: "52:54:00:e7:06:67", ip: "192.168.39.20"} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.280721  593060 main.go:141] libmachine: (newest-cni-602180) Reserved static IP address: 192.168.39.20
	I1008 19:29:18.280732  593060 main.go:141] libmachine: (newest-cni-602180) DBG | skip adding static IP to network mk-newest-cni-602180 - found existing host DHCP lease matching {name: "newest-cni-602180", mac: "52:54:00:e7:06:67", ip: "192.168.39.20"}
	I1008 19:29:18.280743  593060 main.go:141] libmachine: (newest-cni-602180) DBG | Getting to WaitForSSH function...
	I1008 19:29:18.280750  593060 main.go:141] libmachine: (newest-cni-602180) Waiting for SSH to be available...
	I1008 19:29:18.282963  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.283283  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.283313  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.283379  593060 main.go:141] libmachine: (newest-cni-602180) DBG | Using SSH client type: external
	I1008 19:29:18.283440  593060 main.go:141] libmachine: (newest-cni-602180) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa (-rw-------)
	I1008 19:29:18.283483  593060 main.go:141] libmachine: (newest-cni-602180) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.20 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:29:18.283499  593060 main.go:141] libmachine: (newest-cni-602180) DBG | About to run SSH command:
	I1008 19:29:18.283515  593060 main.go:141] libmachine: (newest-cni-602180) DBG | exit 0
	I1008 19:29:18.410280  593060 main.go:141] libmachine: (newest-cni-602180) DBG | SSH cmd err, output: <nil>: 
	I1008 19:29:18.410677  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetConfigRaw
	I1008 19:29:18.411533  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetIP
	I1008 19:29:18.414746  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.415353  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.415397  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.415782  593060 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/config.json ...
	I1008 19:29:18.416006  593060 machine.go:93] provisionDockerMachine start ...
	I1008 19:29:18.416033  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:18.416245  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:18.419109  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.419422  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.419442  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.419669  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:18.419858  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.420005  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.420140  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:18.420295  593060 main.go:141] libmachine: Using SSH client type: native
	I1008 19:29:18.420490  593060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1008 19:29:18.420510  593060 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:29:18.534602  593060 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:29:18.534630  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetMachineName
	I1008 19:29:18.534922  593060 buildroot.go:166] provisioning hostname "newest-cni-602180"
	I1008 19:29:18.534963  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetMachineName
	I1008 19:29:18.535168  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:18.537859  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.538247  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.538285  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.538429  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:18.538655  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.538807  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.538995  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:18.539201  593060 main.go:141] libmachine: Using SSH client type: native
	I1008 19:29:18.539378  593060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1008 19:29:18.539390  593060 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-602180 && echo "newest-cni-602180" | sudo tee /etc/hostname
	I1008 19:29:18.663563  593060 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-602180
	
	I1008 19:29:18.663596  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:18.666604  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.666962  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.666997  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.667218  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:18.667451  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.667650  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.667830  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:18.668025  593060 main.go:141] libmachine: Using SSH client type: native
	I1008 19:29:18.668310  593060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1008 19:29:18.668340  593060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-602180' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-602180/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-602180' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:29:14.142413  592450 addons.go:510] duration metric: took 1.45166107s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1008 19:29:14.206208  592450 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-981259" context rescaled to 1 replicas
	I1008 19:29:15.168688  592450 pod_ready.go:103] pod "coredns-7c65d6cfc9-gpwph" in "kube-system" namespace has status "Ready":"False"
	I1008 19:29:17.169812  592450 pod_ready.go:103] pod "coredns-7c65d6cfc9-gpwph" in "kube-system" namespace has status "Ready":"False"
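The pod_ready lines above show the test helper polling the coredns pod until its Ready condition turns True. A rough client-go equivalent is sketched below, assuming a kubeconfig at the path from the log; the fixed 2s interval and the error handling are simplifications, not the helper's actual code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has condition Ready=True.
    func podReady(client *kubernetes.Clientset, namespace, name string) (bool, error) {
    	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19774-529764/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ready, err := podReady(client, "kube-system", "coredns-7c65d6cfc9-gpwph")
    		if err == nil && ready {
    			fmt.Println("coredns is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // simplified fixed poll interval
    	}
    }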
	I1008 19:29:18.791392  593060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:29:18.791428  593060 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:29:18.791452  593060 buildroot.go:174] setting up certificates
	I1008 19:29:18.791464  593060 provision.go:84] configureAuth start
	I1008 19:29:18.791476  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetMachineName
	I1008 19:29:18.791848  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetIP
	I1008 19:29:18.794266  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.794654  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.794683  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.794863  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:18.797127  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.797489  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.797527  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.797651  593060 provision.go:143] copyHostCerts
	I1008 19:29:18.797720  593060 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:29:18.797734  593060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:29:18.797801  593060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:29:18.797923  593060 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:29:18.797935  593060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:29:18.797964  593060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:29:18.798050  593060 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:29:18.798066  593060 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:29:18.798092  593060 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:29:18.798160  593060 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.newest-cni-602180 san=[127.0.0.1 192.168.39.20 localhost minikube newest-cni-602180]
	I1008 19:29:18.890754  593060 provision.go:177] copyRemoteCerts
	I1008 19:29:18.890828  593060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:29:18.890864  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:18.893729  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.894136  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:18.894170  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:18.894306  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:18.894521  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:18.894713  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:18.894872  593060 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa Username:docker}
	I1008 19:29:18.986033  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:29:19.011211  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:29:19.035275  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:29:19.058779  593060 provision.go:87] duration metric: took 267.299176ms to configureAuth
	I1008 19:29:19.058810  593060 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:29:19.059032  593060 config.go:182] Loaded profile config "newest-cni-602180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:29:19.059126  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:19.062108  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.062531  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:19.062563  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.062677  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:19.062894  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.063072  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.063207  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:19.063374  593060 main.go:141] libmachine: Using SSH client type: native
	I1008 19:29:19.063594  593060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1008 19:29:19.063610  593060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:29:19.305241  593060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:29:19.305275  593060 machine.go:96] duration metric: took 889.251522ms to provisionDockerMachine
	I1008 19:29:19.305291  593060 start.go:293] postStartSetup for "newest-cni-602180" (driver="kvm2")
	I1008 19:29:19.305304  593060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:29:19.305328  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:19.305741  593060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:29:19.305783  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:19.308499  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.308883  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:19.308930  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.309039  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:19.309266  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.309447  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:19.309603  593060 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa Username:docker}
	I1008 19:29:19.406460  593060 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:29:19.411067  593060 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:29:19.411108  593060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:29:19.411178  593060 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:29:19.411297  593060 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:29:19.411419  593060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:29:19.422539  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:29:19.446943  593060 start.go:296] duration metric: took 141.636377ms for postStartSetup
	I1008 19:29:19.446991  593060 fix.go:56] duration metric: took 20.035689697s for fixHost
	I1008 19:29:19.447028  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:19.449666  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.450119  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:19.450151  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.450277  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:19.450500  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.450699  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.450870  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:19.451075  593060 main.go:141] libmachine: Using SSH client type: native
	I1008 19:29:19.451283  593060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I1008 19:29:19.451296  593060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:29:19.563801  593060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728415759.526441887
	
	I1008 19:29:19.563855  593060 fix.go:216] guest clock: 1728415759.526441887
	I1008 19:29:19.563867  593060 fix.go:229] Guest: 2024-10-08 19:29:19.526441887 +0000 UTC Remote: 2024-10-08 19:29:19.4470071 +0000 UTC m=+20.799250573 (delta=79.434787ms)
	I1008 19:29:19.563913  593060 fix.go:200] guest clock delta is within tolerance: 79.434787ms
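Editor's note: the fix.go lines above read the guest's clock over SSH with `date +%s.%N` and compare it against the local wall clock, accepting the host only when the drift stays inside a tolerance. The following is a minimal, self-contained Go sketch of that comparison; the 2-second threshold and the parsing helper are assumptions for illustration, not minikube's actual values or code.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (e.g.
// "1728415759.526441887", where %N always prints nine digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728415759.526441887")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not minikube's
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}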
	I1008 19:29:19.563922  593060 start.go:83] releasing machines lock for "newest-cni-602180", held for 20.152643788s
	I1008 19:29:19.563953  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:19.564229  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetIP
	I1008 19:29:19.567067  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.567478  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:19.567511  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.567671  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:19.568220  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:19.568438  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:19.568615  593060 ssh_runner.go:195] Run: cat /version.json
	I1008 19:29:19.568622  593060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:29:19.568641  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:19.568675  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:19.571282  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.571593  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.571656  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:19.571681  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.571840  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:19.572030  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.572046  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:19.572068  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:19.572244  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:19.572255  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:19.572434  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:19.572433  593060 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa Username:docker}
	I1008 19:29:19.572589  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:19.572728  593060 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa Username:docker}
	I1008 19:29:19.681751  593060 ssh_runner.go:195] Run: systemctl --version
	I1008 19:29:19.688025  593060 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:29:19.835394  593060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:29:19.841553  593060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:29:19.841645  593060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:29:19.857678  593060 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:29:19.857702  593060 start.go:495] detecting cgroup driver to use...
	I1008 19:29:19.857783  593060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:29:19.873963  593060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:29:19.887740  593060 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:29:19.887792  593060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:29:19.902413  593060 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:29:19.916175  593060 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:29:20.029204  593060 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:29:20.173761  593060 docker.go:233] disabling docker service ...
	I1008 19:29:20.173861  593060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:29:20.188774  593060 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:29:20.201110  593060 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:29:20.342221  593060 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:29:20.467074  593060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:29:20.481999  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:29:20.502112  593060 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:29:20.502173  593060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.513220  593060 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:29:20.513293  593060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.523823  593060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.534238  593060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.544350  593060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:29:20.554659  593060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.564578  593060 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.582917  593060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:29:20.593874  593060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:29:20.602717  593060 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:29:20.602778  593060 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:29:20.617403  593060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
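Editor's note: the three commands above are the usual container-networking prerequisites seen in this log: probe the bridge netfilter sysctl, load br_netfilter when the sysctl file is missing, and turn on IPv4 forwarding. A rough Go sketch of the same checks, run locally as root; paths come straight from the log, the error handling is simplified, and this is not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of `sysctl net.bridge.bridge-nf-call-iptables`: the proc file
	// only exists once the br_netfilter module is loaded.
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		fmt.Println("bridge netfilter sysctl missing, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}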
	I1008 19:29:20.627556  593060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:29:20.746339  593060 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:29:20.832622  593060 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:29:20.832708  593060 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:29:20.837409  593060 start.go:563] Will wait 60s for crictl version
	I1008 19:29:20.837468  593060 ssh_runner.go:195] Run: which crictl
	I1008 19:29:20.841156  593060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:29:20.884194  593060 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:29:20.884277  593060 ssh_runner.go:195] Run: crio --version
	I1008 19:29:20.913007  593060 ssh_runner.go:195] Run: crio --version
	I1008 19:29:20.942939  593060 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:29:20.944263  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetIP
	I1008 19:29:20.946928  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:20.947231  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:20.947249  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:20.947485  593060 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:29:20.951662  593060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
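Editor's note: the bash one-liner above rewrites /etc/hosts so that exactly one `host.minikube.internal` entry remains (filter out any old line, append a fresh one, copy the result back). A hedged Go sketch of the same filter-then-append idea; the path, IP and hostname are taken from the log, while writing the file directly instead of going through `sudo cp /tmp/h.$$` is a simplification.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" entry, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entries
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}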
	I1008 19:29:20.965867  593060 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1008 19:29:19.669117  592450 pod_ready.go:98] pod "coredns-7c65d6cfc9-gpwph" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.179 HostIPs:[{IP:192.168.61
.179}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-08 19:29:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-08 19:29:13 +0000 UTC,FinishedAt:2024-10-08 19:29:19 +0000 UTC,ContainerID:cri-o://0bb1f74cb5afffead0e762928c897144b1d3ed2b86e0c920e72c8b3a9072805a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://0bb1f74cb5afffead0e762928c897144b1d3ed2b86e0c920e72c8b3a9072805a Started:0xc000014190 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0006ad660} {Name:kube-api-access-bqsjq MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0006ad680}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1008 19:29:19.669148  592450 pod_ready.go:82] duration metric: took 6.50608191s for pod "coredns-7c65d6cfc9-gpwph" in "kube-system" namespace to be "Ready" ...
	E1008 19:29:19.669167  592450 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-gpwph" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-08 19:29:12 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
1.179 HostIPs:[{IP:192.168.61.179}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-08 19:29:12 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-08 19:29:13 +0000 UTC,FinishedAt:2024-10-08 19:29:19 +0000 UTC,ContainerID:cri-o://0bb1f74cb5afffead0e762928c897144b1d3ed2b86e0c920e72c8b3a9072805a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://0bb1f74cb5afffead0e762928c897144b1d3ed2b86e0c920e72c8b3a9072805a Started:0xc000014190 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0006ad660} {Name:kube-api-access-bqsjq MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc0006ad680}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1008 19:29:19.669180  592450 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-jkwgw" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.673664  592450 pod_ready.go:93] pod "coredns-7c65d6cfc9-jkwgw" in "kube-system" namespace has status "Ready":"True"
	I1008 19:29:19.673685  592450 pod_ready.go:82] duration metric: took 4.492294ms for pod "coredns-7c65d6cfc9-jkwgw" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.673696  592450 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.677943  592450 pod_ready.go:93] pod "etcd-auto-981259" in "kube-system" namespace has status "Ready":"True"
	I1008 19:29:19.677964  592450 pod_ready.go:82] duration metric: took 4.260475ms for pod "etcd-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.677975  592450 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.682470  592450 pod_ready.go:93] pod "kube-apiserver-auto-981259" in "kube-system" namespace has status "Ready":"True"
	I1008 19:29:19.682488  592450 pod_ready.go:82] duration metric: took 4.506386ms for pod "kube-apiserver-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.682501  592450 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.686956  592450 pod_ready.go:93] pod "kube-controller-manager-auto-981259" in "kube-system" namespace has status "Ready":"True"
	I1008 19:29:19.686977  592450 pod_ready.go:82] duration metric: took 4.467375ms for pod "kube-controller-manager-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:19.686989  592450 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-pgqjt" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:20.066652  592450 pod_ready.go:93] pod "kube-proxy-pgqjt" in "kube-system" namespace has status "Ready":"True"
	I1008 19:29:20.066687  592450 pod_ready.go:82] duration metric: took 379.688702ms for pod "kube-proxy-pgqjt" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:20.066701  592450 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:20.467360  592450 pod_ready.go:93] pod "kube-scheduler-auto-981259" in "kube-system" namespace has status "Ready":"True"
	I1008 19:29:20.467384  592450 pod_ready.go:82] duration metric: took 400.675522ms for pod "kube-scheduler-auto-981259" in "kube-system" namespace to be "Ready" ...
	I1008 19:29:20.467393  592450 pod_ready.go:39] duration metric: took 7.32060151s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
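Editor's note: the pod_ready.go lines above poll each system-critical pod until its Ready condition turns True, and treat a pod whose phase is already Succeeded (the replaced coredns replica) as one to skip rather than keep waiting on. Below is a compressed client-go sketch of such a wait, assuming an already configured clientset; it is illustrative only and not minikube's code.

package readywait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls pod ns/name until its Ready condition is True, bailing
// out early for pods that already reached the Succeeded phase, or failing
// once the timeout elapses.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			if pod.Status.Phase == corev1.PodSucceeded {
				return fmt.Errorf("pod %s/%s has phase Succeeded, skipping", ns, name)
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %v waiting for pod %s/%s to be Ready", timeout, ns, name)
}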
	I1008 19:29:20.467407  592450 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:29:20.467462  592450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:29:20.484696  592450 api_server.go:72] duration metric: took 7.793992305s to wait for apiserver process to appear ...
	I1008 19:29:20.484719  592450 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:29:20.484737  592450 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I1008 19:29:20.489773  592450 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I1008 19:29:20.490869  592450 api_server.go:141] control plane version: v1.31.1
	I1008 19:29:20.490897  592450 api_server.go:131] duration metric: took 6.170042ms to wait for apiserver health ...
	I1008 19:29:20.490907  592450 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:29:20.668780  592450 system_pods.go:59] 7 kube-system pods found
	I1008 19:29:20.668819  592450 system_pods.go:61] "coredns-7c65d6cfc9-jkwgw" [898e4911-3844-481c-9ab8-44a2fd18b2da] Running
	I1008 19:29:20.668827  592450 system_pods.go:61] "etcd-auto-981259" [504162ad-d339-46f6-9b48-da3722693a3c] Running
	I1008 19:29:20.668832  592450 system_pods.go:61] "kube-apiserver-auto-981259" [8020409b-f73c-4d85-973f-cde91e50215c] Running
	I1008 19:29:20.668838  592450 system_pods.go:61] "kube-controller-manager-auto-981259" [88f9d338-4fbd-4791-8c9c-d4cc49a764c9] Running
	I1008 19:29:20.668842  592450 system_pods.go:61] "kube-proxy-pgqjt" [0e91c5a0-49a3-4520-b55e-f9d063add332] Running
	I1008 19:29:20.668847  592450 system_pods.go:61] "kube-scheduler-auto-981259" [0782621f-3e79-4a37-8a8c-65ddd400c7b1] Running
	I1008 19:29:20.668851  592450 system_pods.go:61] "storage-provisioner" [8283f7af-b0a0-45a7-bfec-f5cd347a5c2c] Running
	I1008 19:29:20.668857  592450 system_pods.go:74] duration metric: took 177.944085ms to wait for pod list to return data ...
	I1008 19:29:20.668867  592450 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:29:20.866313  592450 default_sa.go:45] found service account: "default"
	I1008 19:29:20.866353  592450 default_sa.go:55] duration metric: took 197.478161ms for default service account to be created ...
	I1008 19:29:20.866366  592450 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:29:21.069051  592450 system_pods.go:86] 7 kube-system pods found
	I1008 19:29:21.069083  592450 system_pods.go:89] "coredns-7c65d6cfc9-jkwgw" [898e4911-3844-481c-9ab8-44a2fd18b2da] Running
	I1008 19:29:21.069091  592450 system_pods.go:89] "etcd-auto-981259" [504162ad-d339-46f6-9b48-da3722693a3c] Running
	I1008 19:29:21.069097  592450 system_pods.go:89] "kube-apiserver-auto-981259" [8020409b-f73c-4d85-973f-cde91e50215c] Running
	I1008 19:29:21.069102  592450 system_pods.go:89] "kube-controller-manager-auto-981259" [88f9d338-4fbd-4791-8c9c-d4cc49a764c9] Running
	I1008 19:29:21.069107  592450 system_pods.go:89] "kube-proxy-pgqjt" [0e91c5a0-49a3-4520-b55e-f9d063add332] Running
	I1008 19:29:21.069115  592450 system_pods.go:89] "kube-scheduler-auto-981259" [0782621f-3e79-4a37-8a8c-65ddd400c7b1] Running
	I1008 19:29:21.069119  592450 system_pods.go:89] "storage-provisioner" [8283f7af-b0a0-45a7-bfec-f5cd347a5c2c] Running
	I1008 19:29:21.069129  592450 system_pods.go:126] duration metric: took 202.755919ms to wait for k8s-apps to be running ...
	I1008 19:29:21.069139  592450 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:29:21.069191  592450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:29:21.084441  592450 system_svc.go:56] duration metric: took 15.291962ms WaitForService to wait for kubelet
	I1008 19:29:21.084478  592450 kubeadm.go:582] duration metric: took 8.393779614s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:29:21.084505  592450 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:29:21.268446  592450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:29:21.268477  592450 node_conditions.go:123] node cpu capacity is 2
	I1008 19:29:21.268491  592450 node_conditions.go:105] duration metric: took 183.980525ms to run NodePressure ...
	I1008 19:29:21.268505  592450 start.go:241] waiting for startup goroutines ...
	I1008 19:29:21.268523  592450 start.go:246] waiting for cluster config update ...
	I1008 19:29:21.268537  592450 start.go:255] writing updated cluster config ...
	I1008 19:29:21.268854  592450 ssh_runner.go:195] Run: rm -f paused
	I1008 19:29:21.345159  592450 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:29:21.347099  592450 out.go:177] * Done! kubectl is now configured to use "auto-981259" cluster and "default" namespace by default
	I1008 19:29:20.967287  593060 kubeadm.go:883] updating cluster {Name:newest-cni-602180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-602180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6
m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:29:20.967449  593060 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:29:20.967529  593060 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:29:21.005792  593060 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:29:21.005857  593060 ssh_runner.go:195] Run: which lz4
	I1008 19:29:21.009925  593060 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:29:21.014182  593060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:29:21.014204  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:29:22.344758  593060 crio.go:462] duration metric: took 1.334858332s to copy over tarball
	I1008 19:29:22.344844  593060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:29:24.602656  593060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.257773377s)
	I1008 19:29:24.602689  593060 crio.go:469] duration metric: took 2.257900001s to extract the tarball
	I1008 19:29:24.602699  593060 ssh_runner.go:146] rm: /preloaded.tar.lz4
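Editor's note: the preload step above copies an lz4-compressed image tarball (~388 MB) into the guest and unpacks it under /var before re-checking crictl. A small Go exec sketch of just the extraction command with the same flags as in the log; running it locally is an assumption made for the example, since minikube issues the command over SSH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Preserve security.capability xattrs, decompress with lz4, extract under /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extracting preload failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted under /var")
}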
	I1008 19:29:24.641079  593060 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:29:24.683579  593060 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:29:24.683612  593060 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:29:24.683623  593060 kubeadm.go:934] updating node { 192.168.39.20 8443 v1.31.1 crio true true} ...
	I1008 19:29:24.683757  593060 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-602180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-602180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:29:24.683860  593060 ssh_runner.go:195] Run: crio config
	I1008 19:29:24.728831  593060 cni.go:84] Creating CNI manager for ""
	I1008 19:29:24.728861  593060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:29:24.728882  593060 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1008 19:29:24.728908  593060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.20 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-602180 NodeName:newest-cni-602180 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.39.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:29:24.729093  593060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-602180"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:29:24.729168  593060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:29:24.741674  593060 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:29:24.741753  593060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:29:24.752224  593060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1008 19:29:24.769410  593060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:29:24.789090  593060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I1008 19:29:24.807749  593060 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I1008 19:29:24.811901  593060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:29:24.824782  593060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:29:24.970929  593060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:29:24.991768  593060 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180 for IP: 192.168.39.20
	I1008 19:29:24.991796  593060 certs.go:194] generating shared ca certs ...
	I1008 19:29:24.991815  593060 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:24.991994  593060 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:29:24.992057  593060 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:29:24.992070  593060 certs.go:256] generating profile certs ...
	I1008 19:29:24.992202  593060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/client.key
	I1008 19:29:24.992276  593060 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/apiserver.key.752d9137
	I1008 19:29:24.992334  593060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/proxy-client.key
	I1008 19:29:24.992496  593060 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:29:24.992553  593060 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:29:24.992566  593060 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:29:24.992603  593060 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:29:24.992636  593060 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:29:24.992666  593060 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:29:24.992718  593060 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:29:24.993727  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:29:25.030727  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:29:25.068539  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:29:25.097940  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:29:25.147546  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:29:25.182502  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:29:25.213373  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:29:25.240564  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:29:25.267056  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:29:25.293390  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:29:25.322287  593060 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:29:25.347877  593060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:29:25.368008  593060 ssh_runner.go:195] Run: openssl version
	I1008 19:29:25.375733  593060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:29:25.389510  593060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:29:25.395152  593060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:29:25.395198  593060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:29:25.402834  593060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:29:25.414639  593060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:29:25.425691  593060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:29:25.430436  593060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:29:25.430520  593060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:29:25.436996  593060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:29:25.451385  593060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:29:25.462658  593060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:29:25.467606  593060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:29:25.467671  593060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:29:25.473806  593060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:29:25.489010  593060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:29:25.493966  593060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:29:25.500367  593060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:29:25.507026  593060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:29:25.513185  593060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:29:25.519744  593060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:29:25.525778  593060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
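Editor's note: each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be expressed in Go with crypto/x509; the sketch below is an equivalent technique rather than what minikube actually runs, and the certificate path in main is just one of the files named in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}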
	I1008 19:29:25.532609  593060 kubeadm.go:392] StartCluster: {Name:newest-cni-602180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-602180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s
ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:29:25.532733  593060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:29:25.532787  593060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:29:25.576841  593060 cri.go:89] found id: ""
	I1008 19:29:25.576919  593060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:29:25.586990  593060 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:29:25.587010  593060 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:29:25.587072  593060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:29:25.597921  593060 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:29:25.599096  593060 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-602180" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:29:25.599852  593060 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-602180" cluster setting kubeconfig missing "newest-cni-602180" context setting]
	I1008 19:29:25.601216  593060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:25.603055  593060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:29:25.612914  593060 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.20
	I1008 19:29:25.612950  593060 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:29:25.612965  593060 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:29:25.613020  593060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:29:25.662088  593060 cri.go:89] found id: ""
	I1008 19:29:25.662170  593060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:29:25.685821  593060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:29:25.698152  593060 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:29:25.698176  593060 kubeadm.go:157] found existing configuration files:
	
	I1008 19:29:25.698234  593060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:29:25.707385  593060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:29:25.707458  593060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:29:25.717530  593060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:29:25.726839  593060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:29:25.726905  593060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:29:25.736047  593060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:29:25.744967  593060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:29:25.745020  593060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:29:25.754598  593060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:29:25.763780  593060 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:29:25.763858  593060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
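Editor's note: the grep/rm pairs above apply one simple rule before the kubeadm init phases: any of the four kubeconfig-style files that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it. A condensed Go sketch of that rule; the file list and endpoint are copied from the log, and deleting in place is a simplification of the sudo commands.

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or stale endpoint: drop it so the kubeadm
			// "init phase kubeconfig" step writes a fresh one.
			_ = os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}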
	I1008 19:29:25.773890  593060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:29:25.783765  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:29:25.908566  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:29:26.775621  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:29:27.060880  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:29:27.159618  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:29:27.238033  593060 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:29:27.238126  593060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:29:27.739159  593060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:29:28.238489  593060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:29:28.337798  593060 api_server.go:72] duration metric: took 1.099762879s to wait for apiserver process to appear ...
	I1008 19:29:28.337842  593060 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:29:28.337869  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:28.338447  593060 api_server.go:269] stopped: https://192.168.39.20:8443/healthz: Get "https://192.168.39.20:8443/healthz": dial tcp 192.168.39.20:8443: connect: connection refused
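Editor's note: the healthz probes in this stretch of the log retry on connection refused (apiserver not listening yet), 403 (anonymous access not yet bootstrapped) and 500 (post-start hooks still running), and only succeed once /healthz returns 200 with body `ok`, as the responses below show. A hedged Go sketch of such a polling loop; the InsecureSkipVerify client and the overall timeout are assumptions made to keep the example self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped here because the example host does
		// not trust the cluster CA; a real caller would pin the CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.20:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthz ok")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}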
	I1008 19:29:28.838408  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:31.600600  593060 api_server.go:279] https://192.168.39.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:29:31.600629  593060 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:29:31.600657  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:31.625448  593060 api_server.go:279] https://192.168.39.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:29:31.625489  593060 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:29:31.838779  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:31.843233  593060 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:29:31.843264  593060 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:29:32.338888  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:32.347389  593060 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:29:32.347413  593060 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:29:32.838787  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:32.848287  593060 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:29:32.848329  593060 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:29:33.338746  593060 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1008 19:29:33.348980  593060 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I1008 19:29:33.357094  593060 api_server.go:141] control plane version: v1.31.1
	I1008 19:29:33.357126  593060 api_server.go:131] duration metric: took 5.01927468s to wait for apiserver health ...
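	(The 403/500-to-200 progression above is the apiserver's /healthz endpoint being polled until every poststarthook reports ok. A minimal sketch of such a polling loop is shown below; the URL and timings are taken from the log, but this is an illustrative stand-in, not minikube's actual api_server.go implementation, and it skips TLS verification because it does not load the cluster CA.)

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls /healthz until it returns HTTP 200 ("ok") or the
	    // deadline expires; 403 (anonymous user) and 500 (poststarthooks still
	    // failing) responses are logged and retried.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz returned "ok"
	                }
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.39.20:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }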
	I1008 19:29:33.357138  593060 cni.go:84] Creating CNI manager for ""
	I1008 19:29:33.357146  593060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:29:33.359123  593060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:29:33.360414  593060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:29:33.383336  593060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
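	(The two commands above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The report does not show the file's contents; the sketch below writes a generic bridge-plus-portmap conflist of the kind CRI-O loads from that directory. The JSON is illustrative only, not the exact 1-k8s.conflist minikube ships, and the subnet is a placeholder.)

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    // A generic bridge CNI configuration; plugin options and subnet are assumptions.
	    const bridgeConflist = `{
	      "cniVersion": "1.0.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        dir := "/etc/cni/net.d"
	        if err := os.MkdirAll(dir, 0o755); err != nil { // sudo mkdir -p /etc/cni/net.d
	            panic(err)
	        }
	        path := filepath.Join(dir, "1-k8s.conflist")
	        if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
	            panic(err)
	        }
	        fmt.Println("wrote", path)
	    }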
	I1008 19:29:33.404846  593060 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:29:33.421190  593060 system_pods.go:59] 8 kube-system pods found
	I1008 19:29:33.421235  593060 system_pods.go:61] "coredns-7c65d6cfc9-mxcng" [dda12b2c-e854-4e41-8866-4a59b7cdba44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:29:33.421250  593060 system_pods.go:61] "etcd-newest-cni-602180" [3891fabb-bbf7-4d0b-b361-36e8dd58b56b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:29:33.421262  593060 system_pods.go:61] "kube-apiserver-newest-cni-602180" [bd170b15-ca7b-453d-b56a-17e8551ba5de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:29:33.421278  593060 system_pods.go:61] "kube-controller-manager-newest-cni-602180" [d06f9236-687a-4b4e-a7cd-3673a436014b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:29:33.421291  593060 system_pods.go:61] "kube-proxy-glh47" [bb0f03b3-6c68-4231-a60b-b74da84f1d31] Running
	I1008 19:29:33.421302  593060 system_pods.go:61] "kube-scheduler-newest-cni-602180" [84754912-2b46-46fe-abd6-ade490e97aef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:29:33.421312  593060 system_pods.go:61] "metrics-server-6867b74b74-m2bdp" [399e0d58-1744-4cb9-877f-b489117a7184] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:29:33.421320  593060 system_pods.go:61] "storage-provisioner" [e1fe0a03-3fae-4943-88b3-447a1c4b4f68] Running
	I1008 19:29:33.421328  593060 system_pods.go:74] duration metric: took 16.456187ms to wait for pod list to return data ...
	I1008 19:29:33.421339  593060 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:29:33.426042  593060 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:29:33.426074  593060 node_conditions.go:123] node cpu capacity is 2
	I1008 19:29:33.426088  593060 node_conditions.go:105] duration metric: took 4.74049ms to run NodePressure ...
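	(The system_pods.go and node_conditions.go lines above list the kube-system pods and read node capacity through the Kubernetes API. A rough client-go equivalent of the "waiting for kube-system pods to appear" step follows; the kubeconfig path is copied from the log but is otherwise an assumption, and minikube's real check also classifies pod readiness rather than just listing pods.)

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Load the kubeconfig that was just rewritten (path per the log above).
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19774-529764/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }

	        // Poll until the apiserver reports at least one kube-system pod.
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	        defer cancel()
	        for {
	            pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	            if err == nil && len(pods.Items) > 0 {
	                for _, p := range pods.Items {
	                    fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	                }
	                return
	            }
	            select {
	            case <-ctx.Done():
	                panic("kube-system pods did not appear in time")
	            case <-time.After(2 * time.Second):
	            }
	        }
	    }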
	I1008 19:29:33.426112  593060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:29:33.704708  593060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:29:33.717126  593060 ops.go:34] apiserver oom_adj: -16
	I1008 19:29:33.717150  593060 kubeadm.go:597] duration metric: took 8.130133206s to restartPrimaryControlPlane
	I1008 19:29:33.717162  593060 kubeadm.go:394] duration metric: took 8.18457432s to StartCluster
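	(The oom_adj check a few lines up shells out to read /proc/<apiserver-pid>/oom_adj and logs -16, i.e. the apiserver is shielded from the OOM killer. A small sketch of the same check done directly in Go; the pgrep usage matches the logged command and error handling is simplified.)

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	        out, err := exec.Command("pgrep", "kube-apiserver").Output()
	        if err != nil {
	            panic(err)
	        }
	        pid := strings.Fields(string(out))[0] // first matching PID

	        data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data))) // expected: -16
	    }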
	I1008 19:29:33.717189  593060 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:33.717278  593060 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:29:33.719129  593060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:33.719364  593060 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:29:33.719433  593060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:29:33.719553  593060 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-602180"
	I1008 19:29:33.719573  593060 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-602180"
	W1008 19:29:33.719585  593060 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:29:33.719607  593060 addons.go:69] Setting default-storageclass=true in profile "newest-cni-602180"
	I1008 19:29:33.719620  593060 host.go:66] Checking if "newest-cni-602180" exists ...
	I1008 19:29:33.719641  593060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-602180"
	I1008 19:29:33.719627  593060 addons.go:69] Setting metrics-server=true in profile "newest-cni-602180"
	I1008 19:29:33.719666  593060 addons.go:234] Setting addon metrics-server=true in "newest-cni-602180"
	W1008 19:29:33.719676  593060 addons.go:243] addon metrics-server should already be in state true
	I1008 19:29:33.719679  593060 config.go:182] Loaded profile config "newest-cni-602180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:29:33.719716  593060 host.go:66] Checking if "newest-cni-602180" exists ...
	I1008 19:29:33.719727  593060 addons.go:69] Setting dashboard=true in profile "newest-cni-602180"
	I1008 19:29:33.719737  593060 addons.go:234] Setting addon dashboard=true in "newest-cni-602180"
	W1008 19:29:33.719742  593060 addons.go:243] addon dashboard should already be in state true
	I1008 19:29:33.719760  593060 host.go:66] Checking if "newest-cni-602180" exists ...
	I1008 19:29:33.720068  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.720071  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.720098  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.720104  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.720110  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.720128  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.720141  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.720176  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.721004  593060 out.go:177] * Verifying Kubernetes components...
	I1008 19:29:33.722376  593060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:29:33.736434  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I1008 19:29:33.736471  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I1008 19:29:33.736893  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.736893  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.737469  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.737472  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.737527  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.737492  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.737951  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.737999  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.738226  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetState
	I1008 19:29:33.738638  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.738686  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.739567  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1008 19:29:33.740126  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.740603  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.740626  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.740948  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.741447  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.741488  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.742083  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I1008 19:29:33.742288  593060 addons.go:234] Setting addon default-storageclass=true in "newest-cni-602180"
	W1008 19:29:33.742309  593060 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:29:33.742373  593060 host.go:66] Checking if "newest-cni-602180" exists ...
	I1008 19:29:33.742526  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.742718  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.742763  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.742958  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.742974  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.743724  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.744323  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.744360  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.757494  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34401
	I1008 19:29:33.759564  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I1008 19:29:33.760158  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I1008 19:29:33.770808  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.770894  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.770925  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.771444  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.771451  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.771476  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.771507  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.771658  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.771676  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.771889  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.771891  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.772025  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.772125  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetState
	I1008 19:29:33.772227  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetState
	I1008 19:29:33.772558  593060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:29:33.772593  593060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:29:33.774598  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:33.775035  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:33.776518  593060 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 19:29:33.776529  593060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:29:33.777678  593060 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1008 19:29:33.777688  593060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:29:33.777701  593060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:29:33.777739  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:33.779399  593060 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 19:29:33.779414  593060 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 19:29:33.779429  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHHostname
	I1008 19:29:33.780495  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:33.780894  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:33.780916  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:33.782104  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:33.782250  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:33.782412  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:33.782569  593060 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa Username:docker}
	I1008 19:29:33.784379  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:33.784830  593060 main.go:141] libmachine: (newest-cni-602180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:06:67", ip: ""} in network mk-newest-cni-602180: {Iface:virbr3 ExpiryTime:2024-10-08 20:29:11 +0000 UTC Type:0 Mac:52:54:00:e7:06:67 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:newest-cni-602180 Clientid:01:52:54:00:e7:06:67}
	I1008 19:29:33.784849  593060 main.go:141] libmachine: (newest-cni-602180) DBG | domain newest-cni-602180 has defined IP address 192.168.39.20 and MAC address 52:54:00:e7:06:67 in network mk-newest-cni-602180
	I1008 19:29:33.785098  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHPort
	I1008 19:29:33.785267  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHKeyPath
	I1008 19:29:33.785416  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetSSHUsername
	I1008 19:29:33.785548  593060 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/newest-cni-602180/id_rsa Username:docker}
	I1008 19:29:33.789201  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I1008 19:29:33.789606  593060 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:29:33.789932  593060 main.go:141] libmachine: Using API Version  1
	I1008 19:29:33.789946  593060 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:29:33.790265  593060 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:29:33.790459  593060 main.go:141] libmachine: (newest-cni-602180) Calling .GetState
	I1008 19:29:33.792031  593060 main.go:141] libmachine: (newest-cni-602180) Calling .DriverName
	I1008 19:29:33.793601  593060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I1008 19:29:33.793796  593060 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.721355511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415776721324969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=325ea8be-09fc-4a9e-b566-d1c4ab89972f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.721949889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f68bcb25-9e29-42ae-bc51-9bee7cc0c934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.722045558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f68bcb25-9e29-42ae-bc51-9bee7cc0c934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.722317368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f68bcb25-9e29-42ae-bc51-9bee7cc0c934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.771617390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85c74871-1030-4ec3-b9f4-75fd645670eb name=/runtime.v1.RuntimeService/Version
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.771711778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85c74871-1030-4ec3-b9f4-75fd645670eb name=/runtime.v1.RuntimeService/Version
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.773656243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96822af6-5b18-4cf5-866f-9b53c24cf93d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.774050424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415776774021212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96822af6-5b18-4cf5-866f-9b53c24cf93d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.774860754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d9cef16-1746-470a-854e-ba26644a6801 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.774914293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d9cef16-1746-470a-854e-ba26644a6801 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.775149006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d9cef16-1746-470a-854e-ba26644a6801 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.819526444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08aa2f10-8660-4f35-aee4-227cb23a6e86 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.819652221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08aa2f10-8660-4f35-aee4-227cb23a6e86 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.823046864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69705430-04d6-4840-9eda-922ea3a8f876 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.823715983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415776823674965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69705430-04d6-4840-9eda-922ea3a8f876 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.824820367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc2f3da9-7523-47c5-b80d-2cac000699df name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.824914081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc2f3da9-7523-47c5-b80d-2cac000699df name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.825183608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc2f3da9-7523-47c5-b80d-2cac000699df name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.866318883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4c069c4-0e48-455d-864f-8567248aa7a6 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.866439588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4c069c4-0e48-455d-864f-8567248aa7a6 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.867840103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=808b94ed-7daf-4771-990e-c9017781c9ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.868232192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415776868211462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808b94ed-7daf-4771-990e-c9017781c9ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.868871170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932aaccb-2edf-47a4-82fc-a662c795f42d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.868964889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932aaccb-2edf-47a4-82fc-a662c795f42d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:29:36 embed-certs-783146 crio[694]: time="2024-10-08 19:29:36.869240973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414485763709169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd99acb7c4645a9efb68d915498acfde6ee4bf19ada3dc2a5907dbd5ee47df3,PodSandboxId:3769c1b3d855deb09e06a44f2657578815781593087fbc706f3cae5a46cdceb0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414463836770379,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5,PodSandboxId:8cdd040a91ddc7650945dd15417abf4726a4ed6f8d15a78f35f506d28524cca5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414461617994302,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kh9nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fcd8158-57cf-4f5e-9be7-55c1107bf3b0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2,PodSandboxId:b0868e02645b7da89ac588adacd8c7e2db9dfc147ff101cf7b67c7e99cef0f86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414454885696477,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9l7t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20a17c15-0fd2-40e8-b
42a-ce35d2fbdf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda,PodSandboxId:f738edcebb0b9bd91fc4147e54bde26ec3ddd20cae3badaa2fca495c6dfa2abb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414454877744875,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad6a8a6-5f69-4323-b540-2f8d330d8
d84,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f,PodSandboxId:adcb3c5a432afa7c2e6f73f9926d698af532284e688c65cec37fbeea96a3ee5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414450226334223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b11e70ade621b4409a16d9ac18a734,},Annota
tions:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674,PodSandboxId:beda36eaf3c3ea0ffca88020e89842b07579b9eaef227c7b26e5681c554ee799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414450238669002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57b4da06010d0f3489a51e057e
14ecd8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f,PodSandboxId:947ba3da483b32df51bc1e95592e329d232eae5fd48a159b62047ef2b40f1b52,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414450204632680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4e7ef45f15d8d483fe00339800dc812,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f,PodSandboxId:391c64cf760e98be3b38dd90d00fc78c4c2907dadeedf4a93d13f70b3aaab398,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414450208503336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-783146,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4063985ffee1796af14cc67de0ba713a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=932aaccb-2edf-47a4-82fc-a662c795f42d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e05aeedd245a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       3                   f738edcebb0b9       storage-provisioner
	8fd99acb7c464       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   3769c1b3d855d       busybox
	b4aceabf5c4e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   8cdd040a91ddc       coredns-7c65d6cfc9-kh9nk
	44cb46dbe3fe0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      22 minutes ago      Running             kube-proxy                1                   b0868e02645b7       kube-proxy-9l7t7
	ffa903de853fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       2                   f738edcebb0b9       storage-provisioner
	2a7606685755c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      22 minutes ago      Running             kube-controller-manager   1                   beda36eaf3c3e       kube-controller-manager-embed-certs-783146
	639ce8bca3484       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      22 minutes ago      Running             kube-scheduler            1                   adcb3c5a432af       kube-scheduler-embed-certs-783146
	8355c440ac929       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      22 minutes ago      Running             kube-apiserver            1                   391c64cf760e9       kube-apiserver-embed-certs-783146
	ef34a632006c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   947ba3da483b3       etcd-embed-certs-783146
	
	
	==> coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55053 - 58221 "HINFO IN 6943118436927033031.900514570035518152. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017688127s
	
	
	==> describe nodes <==
	Name:               embed-certs-783146
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-783146
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=embed-certs-783146
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T19_00_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 19:00:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-783146
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 19:29:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 19:28:28 +0000   Tue, 08 Oct 2024 19:00:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 19:28:28 +0000   Tue, 08 Oct 2024 19:00:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 19:28:28 +0000   Tue, 08 Oct 2024 19:00:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 19:28:28 +0000   Tue, 08 Oct 2024 19:07:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.183
	  Hostname:    embed-certs-783146
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bba105b17a9417f8d6ef151a389204d
	  System UUID:                0bba105b-17a9-417f-8d6e-f151a389204d
	  Boot ID:                    9643f9ed-a128-450c-a636-5c655cbc3124
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-kh9nk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-783146                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-783146             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-783146    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-9l7t7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-783146             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-4d48d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-783146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-783146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-783146 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-783146 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-783146 event: Registered Node embed-certs-783146 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-783146 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-783146 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-783146 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-783146 event: Registered Node embed-certs-783146 in Controller
	
	
	==> dmesg <==
	[Oct 8 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.817083] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.441179] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.490071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.395884] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.054279] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051661] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.206186] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.122442] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.292595] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +4.003135] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +2.202305] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.077030] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.417793] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.576030] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +4.342643] kauditd_printk_skb: 80 callbacks suppressed
	[Oct 8 19:08] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] <==
	{"level":"info","ts":"2024-10-08T19:08:10.229573Z","caller":"traceutil/trace.go:171","msg":"trace[601122315] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:611; }","duration":"426.397424ms","start":"2024-10-08T19:08:09.803168Z","end":"2024-10-08T19:08:10.229565Z","steps":["trace[601122315] 'agreement among raft nodes before linearized reading'  (duration: 426.271977ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:08:10.229692Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.075712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-4d48d\" ","response":"range_response_count:1 size:4386"}
	{"level":"info","ts":"2024-10-08T19:08:10.229832Z","caller":"traceutil/trace.go:171","msg":"trace[1316531411] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-4d48d; range_end:; response_count:1; response_revision:611; }","duration":"295.14926ms","start":"2024-10-08T19:08:09.934591Z","end":"2024-10-08T19:08:10.229741Z","steps":["trace[1316531411] 'agreement among raft nodes before linearized reading'  (duration: 294.926192ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:17:32.264097Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":835}
	{"level":"info","ts":"2024-10-08T19:17:32.273891Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":835,"took":"9.181302ms","hash":2078501744,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2662400,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-08T19:17:32.273975Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2078501744,"revision":835,"compact-revision":-1}
	{"level":"info","ts":"2024-10-08T19:22:32.271851Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1078}
	{"level":"info","ts":"2024-10-08T19:22:32.275973Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1078,"took":"3.831943ms","hash":870592062,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-08T19:22:32.276028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":870592062,"revision":1078,"compact-revision":835}
	{"level":"info","ts":"2024-10-08T19:27:32.279638Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1322}
	{"level":"info","ts":"2024-10-08T19:27:32.283558Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1322,"took":"3.290368ms","hash":4137875858,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-08T19:27:32.283637Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4137875858,"revision":1322,"compact-revision":1078}
	{"level":"info","ts":"2024-10-08T19:28:15.158619Z","caller":"traceutil/trace.go:171","msg":"trace[2103565273] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"185.033569ms","start":"2024-10-08T19:28:14.973263Z","end":"2024-10-08T19:28:15.158296Z","steps":["trace[2103565273] 'process raft request'  (duration: 184.885739ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:28:33.355765Z","caller":"traceutil/trace.go:171","msg":"trace[1255534047] transaction","detail":"{read_only:false; response_revision:1615; number_of_response:1; }","duration":"113.670133ms","start":"2024-10-08T19:28:33.242077Z","end":"2024-10-08T19:28:33.355747Z","steps":["trace[1255534047] 'process raft request'  (duration: 113.535829ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:28:57.664279Z","caller":"traceutil/trace.go:171","msg":"trace[463673957] transaction","detail":"{read_only:false; response_revision:1635; number_of_response:1; }","duration":"189.655018ms","start":"2024-10-08T19:28:57.474603Z","end":"2024-10-08T19:28:57.664258Z","steps":["trace[463673957] 'process raft request'  (duration: 189.504366ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:28:57.881267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.223611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:28:57.881405Z","caller":"traceutil/trace.go:171","msg":"trace[196284486] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1635; }","duration":"138.436751ms","start":"2024-10-08T19:28:57.742953Z","end":"2024-10-08T19:28:57.881389Z","steps":["trace[196284486] 'range keys from in-memory index tree'  (duration: 138.088986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:28:58.471018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.334937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:28:58.471379Z","caller":"traceutil/trace.go:171","msg":"trace[1052128403] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1636; }","duration":"174.737247ms","start":"2024-10-08T19:28:58.296565Z","end":"2024-10-08T19:28:58.471303Z","steps":["trace[1052128403] 'count revisions from in-memory index tree'  (duration: 174.273923ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:29:25.966350Z","caller":"traceutil/trace.go:171","msg":"trace[639934387] linearizableReadLoop","detail":"{readStateIndex:1966; appliedIndex:1965; }","duration":"137.522968ms","start":"2024-10-08T19:29:25.828806Z","end":"2024-10-08T19:29:25.966329Z","steps":["trace[639934387] 'read index received'  (duration: 137.365272ms)","trace[639934387] 'applied index is now lower than readState.Index'  (duration: 157.307µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-08T19:29:25.966540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.684305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:29:25.966614Z","caller":"traceutil/trace.go:171","msg":"trace[502058172] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1659; }","duration":"137.806147ms","start":"2024-10-08T19:29:25.828802Z","end":"2024-10-08T19:29:25.966608Z","steps":["trace[502058172] 'agreement among raft nodes before linearized reading'  (duration: 137.622472ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:29:25.966732Z","caller":"traceutil/trace.go:171","msg":"trace[1012460231] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"150.684665ms","start":"2024-10-08T19:29:25.816032Z","end":"2024-10-08T19:29:25.966717Z","steps":["trace[1012460231] 'process raft request'  (duration: 150.207241ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:29:26.226829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.798833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-08T19:29:26.227320Z","caller":"traceutil/trace.go:171","msg":"trace[967605124] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1659; }","duration":"140.300034ms","start":"2024-10-08T19:29:26.087003Z","end":"2024-10-08T19:29:26.227303Z","steps":["trace[967605124] 'count revisions from in-memory index tree'  (duration: 139.751108ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:29:37 up 22 min,  0 users,  load average: 0.35, 0.16, 0.11
	Linux embed-certs-783146 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] <==
	I1008 19:25:34.525825       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:25:34.525922       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:27:33.526207       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:27:33.526352       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1008 19:27:34.528120       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:27:34.528314       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:27:34.528247       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:27:34.528504       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:27:34.529656       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:27:34.529717       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:28:34.530171       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:28:34.530259       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:28:34.530205       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:28:34.530330       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:28:34.531631       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:28:34.531698       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] <==
	I1008 19:24:07.781912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:24:37.253537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:24:37.792008       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:25:07.259421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:25:07.799225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:25:37.265041       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:25:37.807509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:26:07.271269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:26:07.815742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:26:37.278261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:26:37.823801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:27:07.284439       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:27:07.831096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:27:37.290025       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:27:37.841880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:28:07.296717       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:28:07.849959       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:28:28.313834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-783146"
	E1008 19:28:37.305669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:28:37.860143       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:28:51.555627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="282.077µs"
	I1008 19:29:05.552802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="58.803µs"
	E1008 19:29:07.314361       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:29:07.869694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:29:37.327778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 19:07:35.112970       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 19:07:35.126657       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.183"]
	E1008 19:07:35.126854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 19:07:35.156909       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 19:07:35.156960       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 19:07:35.156983       1 server_linux.go:169] "Using iptables Proxier"
	I1008 19:07:35.159283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 19:07:35.159617       1 server.go:483] "Version info" version="v1.31.1"
	I1008 19:07:35.159642       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:07:35.161200       1 config.go:199] "Starting service config controller"
	I1008 19:07:35.161238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 19:07:35.161256       1 config.go:105] "Starting endpoint slice config controller"
	I1008 19:07:35.161259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 19:07:35.161626       1 config.go:328] "Starting node config controller"
	I1008 19:07:35.161656       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 19:07:35.261722       1 shared_informer.go:320] Caches are synced for node config
	I1008 19:07:35.261807       1 shared_informer.go:320] Caches are synced for service config
	I1008 19:07:35.261817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] <==
	I1008 19:07:31.582352       1 serving.go:386] Generated self-signed cert in-memory
	W1008 19:07:33.452122       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 19:07:33.452165       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 19:07:33.452178       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 19:07:33.452187       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 19:07:33.524092       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1008 19:07:33.524274       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:07:33.527950       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 19:07:33.528007       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 19:07:33.528432       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 19:07:33.528674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 19:07:33.628966       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 19:28:36 embed-certs-783146 kubelet[901]: E1008 19:28:36.553304     901 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 08 19:28:36 embed-certs-783146 kubelet[901]: E1008 19:28:36.553402     901 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 08 19:28:36 embed-certs-783146 kubelet[901]: E1008 19:28:36.553952     901 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9jzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-4d48d_kube-system(7d305dc9-31d0-482b-8b3e-82be14daeaf0): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 08 19:28:36 embed-certs-783146 kubelet[901]: E1008 19:28:36.555225     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:28:38 embed-certs-783146 kubelet[901]: E1008 19:28:38.837824     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415718837005271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:28:38 embed-certs-783146 kubelet[901]: E1008 19:28:38.838388     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415718837005271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:28:48 embed-certs-783146 kubelet[901]: E1008 19:28:48.840694     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415728840198829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:28:48 embed-certs-783146 kubelet[901]: E1008 19:28:48.841115     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415728840198829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:28:51 embed-certs-783146 kubelet[901]: E1008 19:28:51.538492     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:28:58 embed-certs-783146 kubelet[901]: E1008 19:28:58.843510     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415738842998864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:28:58 embed-certs-783146 kubelet[901]: E1008 19:28:58.843553     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415738842998864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:05 embed-certs-783146 kubelet[901]: E1008 19:29:05.537963     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:29:08 embed-certs-783146 kubelet[901]: E1008 19:29:08.845746     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415748845305102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:08 embed-certs-783146 kubelet[901]: E1008 19:29:08.845795     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415748845305102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:16 embed-certs-783146 kubelet[901]: E1008 19:29:16.537720     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	Oct 08 19:29:18 embed-certs-783146 kubelet[901]: E1008 19:29:18.848478     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415758847989449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:18 embed-certs-783146 kubelet[901]: E1008 19:29:18.848507     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415758847989449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]: E1008 19:29:28.564859     901 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]: E1008 19:29:28.850081     901 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415768849735729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:28 embed-certs-783146 kubelet[901]: E1008 19:29:28.850106     901 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415768849735729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:29 embed-certs-783146 kubelet[901]: E1008 19:29:29.537322     901 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4d48d" podUID="7d305dc9-31d0-482b-8b3e-82be14daeaf0"
	
	
	==> storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] <==
	I1008 19:08:05.864350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 19:08:05.882900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 19:08:05.882997       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 19:08:23.284848       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 19:08:23.285054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-783146_577336a5-f01e-431e-a81b-e9bab9aca163!
	I1008 19:08:23.286789       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2457038-be1d-43b0-881b-88857d3f7f63", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-783146_577336a5-f01e-431e-a81b-e9bab9aca163 became leader
	I1008 19:08:23.385991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-783146_577336a5-f01e-431e-a81b-e9bab9aca163!
	
	
	==> storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] <==
	I1008 19:07:35.042028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 19:08:05.045506       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783146 -n embed-certs-783146
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-783146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4d48d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-783146 describe pod metrics-server-6867b74b74-4d48d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-783146 describe pod metrics-server-6867b74b74-4d48d: exit status 1 (86.825961ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4d48d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-783146 describe pod metrics-server-6867b74b74-4d48d: exit status 1
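The NotFound result above most likely means the metrics-server pod listed a moment earlier was replaced by its ReplicaSet before the describe call ran. A minimal manual re-check that does not pin a pod name, assuming the standard k8s-app=metrics-server label from the addon manifest (illustrative only, not part of the test run):

	kubectl --context embed-certs-783146 get pods -n kube-system -l k8s-app=metrics-server
	kubectl --context embed-certs-783146 describe deployment metrics-server -n kube-system

Selecting by label or describing the Deployment stays valid across pod restarts, which is why the by-name describe can miss even though the field-selector listing still reported a non-running pod.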
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (488.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (491.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-08 19:30:14.825171833 +0000 UTC m=+7008.314324012
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-142496 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.376µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-142496 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
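The assertion at start_stop_delete_test.go:297 appears to inspect the dashboard-metrics-scraper Deployment and look for an image containing registry.k8s.io/echoserver:1.4; because the describe call above hit the test's context deadline, no deployment info was captured to match against. The same image lookup can be reproduced by hand once a deadline is no longer in play (illustrative; the context and deployment names are taken from the lines above):

	kubectl --context default-k8s-diff-port-142496 get deployment dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'

Per the message above, the test expects the reported image string to contain registry.k8s.io/echoserver:1.4.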
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-142496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-142496 logs -n 25: (1.516838777s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| start   | -p calico-981259 --memory=3072                       | calico-981259         | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo journalctl                       | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo docker                           | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo                                  | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo cat                              | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo containerd                       | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo systemctl                        | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo find                             | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-981259 sudo crio                             | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-981259                                       | auto-981259           | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC | 08 Oct 24 19:29 UTC |
	| start   | -p custom-flannel-981259                             | custom-flannel-981259 | jenkins | v1.34.0 | 08 Oct 24 19:29 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:29:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:29:49.571405  595471 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:29:49.571648  595471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:29:49.571660  595471 out.go:358] Setting ErrFile to fd 2...
	I1008 19:29:49.571667  595471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:29:49.571953  595471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:29:49.572530  595471 out.go:352] Setting JSON to false
	I1008 19:29:49.573652  595471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11542,"bootTime":1728404248,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:29:49.573759  595471 start.go:139] virtualization: kvm guest
	I1008 19:29:49.575864  595471 out.go:177] * [custom-flannel-981259] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:29:49.577003  595471 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:29:49.577089  595471 notify.go:220] Checking for updates...
	I1008 19:29:49.578828  595471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:29:49.579753  595471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:29:49.580745  595471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:29:49.581669  595471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:29:49.582563  595471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:29:49.583853  595471 config.go:182] Loaded profile config "calico-981259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:29:49.583985  595471 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:29:49.584080  595471 config.go:182] Loaded profile config "kindnet-981259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:29:49.584170  595471 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:29:49.621465  595471 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 19:29:49.622512  595471 start.go:297] selected driver: kvm2
	I1008 19:29:49.622529  595471 start.go:901] validating driver "kvm2" against <nil>
	I1008 19:29:49.622545  595471 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:29:49.623265  595471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:29:49.623385  595471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:29:49.640610  595471 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:29:49.640657  595471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 19:29:49.640995  595471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:29:49.641030  595471 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1008 19:29:49.641045  595471 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1008 19:29:49.641105  595471 start.go:340] cluster config:
	{Name:custom-flannel-981259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-981259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:29:49.641234  595471 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:29:49.642903  595471 out.go:177] * Starting "custom-flannel-981259" primary control-plane node in "custom-flannel-981259" cluster
	I1008 19:29:50.875604  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:29:50.876150  593901 main.go:141] libmachine: (kindnet-981259) DBG | unable to find current IP address of domain kindnet-981259 in network mk-kindnet-981259
	I1008 19:29:50.876191  593901 main.go:141] libmachine: (kindnet-981259) DBG | I1008 19:29:50.876090  593941 retry.go:31] will retry after 1.923872194s: waiting for machine to come up
	I1008 19:29:52.802164  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:29:52.802703  593901 main.go:141] libmachine: (kindnet-981259) DBG | unable to find current IP address of domain kindnet-981259 in network mk-kindnet-981259
	I1008 19:29:52.802727  593901 main.go:141] libmachine: (kindnet-981259) DBG | I1008 19:29:52.802664  593941 retry.go:31] will retry after 2.611901589s: waiting for machine to come up
	I1008 19:29:49.644132  595471 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:29:49.644185  595471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 19:29:49.644197  595471 cache.go:56] Caching tarball of preloaded images
	I1008 19:29:49.644311  595471 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:29:49.644329  595471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 19:29:49.644464  595471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/custom-flannel-981259/config.json ...
	I1008 19:29:49.644491  595471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/custom-flannel-981259/config.json: {Name:mk6afa50e1ccb7c45dc5098bb4ef05024217e94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:29:49.644687  595471 start.go:360] acquireMachinesLock for custom-flannel-981259: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:29:55.416674  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:29:55.417127  593901 main.go:141] libmachine: (kindnet-981259) DBG | unable to find current IP address of domain kindnet-981259 in network mk-kindnet-981259
	I1008 19:29:55.417168  593901 main.go:141] libmachine: (kindnet-981259) DBG | I1008 19:29:55.417093  593941 retry.go:31] will retry after 2.862442135s: waiting for machine to come up
	I1008 19:29:58.283037  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:29:58.283407  593901 main.go:141] libmachine: (kindnet-981259) DBG | unable to find current IP address of domain kindnet-981259 in network mk-kindnet-981259
	I1008 19:29:58.283430  593901 main.go:141] libmachine: (kindnet-981259) DBG | I1008 19:29:58.283370  593941 retry.go:31] will retry after 4.893394057s: waiting for machine to come up
	I1008 19:30:03.180963  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.181358  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has current primary IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.181381  593901 main.go:141] libmachine: (kindnet-981259) Found IP for machine: 192.168.72.93
	I1008 19:30:03.181394  593901 main.go:141] libmachine: (kindnet-981259) Reserving static IP address...
	I1008 19:30:03.181818  593901 main.go:141] libmachine: (kindnet-981259) DBG | unable to find host DHCP lease matching {name: "kindnet-981259", mac: "52:54:00:ab:b2:c4", ip: "192.168.72.93"} in network mk-kindnet-981259
	I1008 19:30:03.255487  593901 main.go:141] libmachine: (kindnet-981259) Reserved static IP address: 192.168.72.93
	I1008 19:30:03.255518  593901 main.go:141] libmachine: (kindnet-981259) Waiting for SSH to be available...
	I1008 19:30:03.255526  593901 main.go:141] libmachine: (kindnet-981259) DBG | Getting to WaitForSSH function...
	I1008 19:30:03.258224  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.258713  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.258741  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.258850  593901 main.go:141] libmachine: (kindnet-981259) DBG | Using SSH client type: external
	I1008 19:30:03.258875  593901 main.go:141] libmachine: (kindnet-981259) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kindnet-981259/id_rsa (-rw-------)
	I1008 19:30:03.258911  593901 main.go:141] libmachine: (kindnet-981259) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/kindnet-981259/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:30:03.258921  593901 main.go:141] libmachine: (kindnet-981259) DBG | About to run SSH command:
	I1008 19:30:03.258932  593901 main.go:141] libmachine: (kindnet-981259) DBG | exit 0
	I1008 19:30:03.386235  593901 main.go:141] libmachine: (kindnet-981259) DBG | SSH cmd err, output: <nil>: 
	I1008 19:30:03.386544  593901 main.go:141] libmachine: (kindnet-981259) KVM machine creation complete!
	I1008 19:30:03.386918  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetConfigRaw
	I1008 19:30:03.387489  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:03.387697  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:03.387892  593901 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 19:30:03.387906  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetState
	I1008 19:30:03.389323  593901 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 19:30:03.389337  593901 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 19:30:03.389343  593901 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 19:30:03.389348  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:03.391617  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.391990  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.392021  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.392172  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:03.392357  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.392543  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.392698  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:03.392893  593901 main.go:141] libmachine: Using SSH client type: native
	I1008 19:30:03.393082  593901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.93 22 <nil> <nil>}
	I1008 19:30:03.393092  593901 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 19:30:03.497462  593901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:30:03.497488  593901 main.go:141] libmachine: Detecting the provisioner...
	I1008 19:30:03.497506  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:03.500372  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.500734  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.500763  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.500887  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:03.501081  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.501265  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.501402  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:03.501567  593901 main.go:141] libmachine: Using SSH client type: native
	I1008 19:30:03.501793  593901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.93 22 <nil> <nil>}
	I1008 19:30:03.501807  593901 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 19:30:03.606946  593901 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1008 19:30:03.607053  593901 main.go:141] libmachine: found compatible host: buildroot
	I1008 19:30:03.607068  593901 main.go:141] libmachine: Provisioning with buildroot...
	I1008 19:30:03.607076  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetMachineName
	I1008 19:30:03.607355  593901 buildroot.go:166] provisioning hostname "kindnet-981259"
	I1008 19:30:03.607389  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetMachineName
	I1008 19:30:03.607621  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:03.610394  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.610744  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.610771  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.610926  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:03.611089  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.611237  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.611429  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:03.611611  593901 main.go:141] libmachine: Using SSH client type: native
	I1008 19:30:03.611846  593901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.93 22 <nil> <nil>}
	I1008 19:30:03.611870  593901 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-981259 && echo "kindnet-981259" | sudo tee /etc/hostname
	I1008 19:30:03.731653  593901 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-981259
	
	I1008 19:30:03.731683  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:03.734243  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.734597  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.734625  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.734759  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:03.734922  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.735065  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:03.735182  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:03.735336  593901 main.go:141] libmachine: Using SSH client type: native
	I1008 19:30:03.735520  593901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.93 22 <nil> <nil>}
	I1008 19:30:03.735535  593901 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-981259' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-981259/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-981259' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:30:03.850661  593901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:30:03.850698  593901 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:30:03.850736  593901 buildroot.go:174] setting up certificates
	I1008 19:30:03.850755  593901 provision.go:84] configureAuth start
	I1008 19:30:03.850769  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetMachineName
	I1008 19:30:03.851046  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetIP
	I1008 19:30:03.853546  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.853932  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.853955  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.854078  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:03.856105  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.856452  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:03.856480  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:03.856589  593901 provision.go:143] copyHostCerts
	I1008 19:30:03.856676  593901 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:30:03.856690  593901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:30:03.856769  593901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:30:03.856923  593901 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:30:03.856936  593901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:30:03.856972  593901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:30:03.857053  593901 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:30:03.857064  593901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:30:03.857104  593901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:30:03.857169  593901 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.kindnet-981259 san=[127.0.0.1 192.168.72.93 kindnet-981259 localhost minikube]
	I1008 19:30:04.175822  593901 provision.go:177] copyRemoteCerts
	I1008 19:30:04.175890  593901 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:30:04.175918  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:04.178745  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.179104  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.179137  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.179370  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:04.179561  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.179830  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:04.180014  593901 sshutil.go:53] new ssh client: &{IP:192.168.72.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kindnet-981259/id_rsa Username:docker}
	I1008 19:30:04.264451  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:30:04.287992  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1008 19:30:04.311124  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:30:04.335497  593901 provision.go:87] duration metric: took 484.72311ms to configureAuth
	I1008 19:30:04.335532  593901 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:30:04.335773  593901 config.go:182] Loaded profile config "kindnet-981259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:30:04.335877  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:04.338431  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.338825  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.338848  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.339065  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:04.339284  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.339458  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.339605  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:04.339771  593901 main.go:141] libmachine: Using SSH client type: native
	I1008 19:30:04.339955  593901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.93 22 <nil> <nil>}
	I1008 19:30:04.339969  593901 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:30:04.818977  594684 start.go:364] duration metric: took 21.201372545s to acquireMachinesLock for "calico-981259"
	I1008 19:30:04.819057  594684 start.go:93] Provisioning new machine with config: &{Name:calico-981259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-981259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:30:04.819210  594684 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 19:30:04.577644  593901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:30:04.577671  593901 main.go:141] libmachine: Checking connection to Docker...
	I1008 19:30:04.577679  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetURL
	I1008 19:30:04.579045  593901 main.go:141] libmachine: (kindnet-981259) DBG | Using libvirt version 6000000
	I1008 19:30:04.581124  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.581405  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.581428  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.581564  593901 main.go:141] libmachine: Docker is up and running!
	I1008 19:30:04.581579  593901 main.go:141] libmachine: Reticulating splines...
	I1008 19:30:04.581589  593901 client.go:171] duration metric: took 25.017683514s to LocalClient.Create
	I1008 19:30:04.581624  593901 start.go:167] duration metric: took 25.017766656s to libmachine.API.Create "kindnet-981259"
	I1008 19:30:04.581637  593901 start.go:293] postStartSetup for "kindnet-981259" (driver="kvm2")
	I1008 19:30:04.581650  593901 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:30:04.581668  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:04.581903  593901 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:30:04.581925  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:04.583783  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.584116  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.584153  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.584345  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:04.584526  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.584726  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:04.584881  593901 sshutil.go:53] new ssh client: &{IP:192.168.72.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kindnet-981259/id_rsa Username:docker}
	I1008 19:30:04.668048  593901 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:30:04.672180  593901 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:30:04.672210  593901 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:30:04.672271  593901 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:30:04.672342  593901 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:30:04.672433  593901 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:30:04.681335  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:30:04.704424  593901 start.go:296] duration metric: took 122.760591ms for postStartSetup
	I1008 19:30:04.704475  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetConfigRaw
	I1008 19:30:04.705080  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetIP
	I1008 19:30:04.707728  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.708090  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.708120  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.708348  593901 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/config.json ...
	I1008 19:30:04.708552  593901 start.go:128] duration metric: took 25.164385275s to createHost
	I1008 19:30:04.708580  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:04.711946  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.712321  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.712360  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.712458  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:04.712772  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.712930  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.713085  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:04.713269  593901 main.go:141] libmachine: Using SSH client type: native
	I1008 19:30:04.713450  593901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.93 22 <nil> <nil>}
	I1008 19:30:04.713469  593901 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:30:04.818804  593901 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728415804.775842172
	
	I1008 19:30:04.818827  593901 fix.go:216] guest clock: 1728415804.775842172
	I1008 19:30:04.818835  593901 fix.go:229] Guest: 2024-10-08 19:30:04.775842172 +0000 UTC Remote: 2024-10-08 19:30:04.708567973 +0000 UTC m=+25.305158582 (delta=67.274199ms)
	I1008 19:30:04.818867  593901 fix.go:200] guest clock delta is within tolerance: 67.274199ms
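The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host-side timestamp and only resynchronise when the delta exceeds a tolerance. A small Go sketch of that comparison using the two timestamps from the log; the one-second tolerance here is an assumption for illustration, not minikube's configured value:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute difference between the guest
// clock and the host-side reference, and whether it is small enough to skip
// resynchronising. The tolerance value is an assumption for illustration.
func clockDeltaWithinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1728415804775842172)  // 1728415804.775842172 from "date +%s.%N"
	remote := time.Unix(0, 1728415804708567973) // host-side timestamp from the log
	delta, ok := clockDeltaWithinTolerance(guest, remote, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=67.274199ms within tolerance: true
}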
	I1008 19:30:04.818873  593901 start.go:83] releasing machines lock for "kindnet-981259", held for 25.274789905s
	I1008 19:30:04.818900  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:04.819181  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetIP
	I1008 19:30:04.821967  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.822469  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.822493  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.822736  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:04.823285  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:04.823475  593901 main.go:141] libmachine: (kindnet-981259) Calling .DriverName
	I1008 19:30:04.823615  593901 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:30:04.823666  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:04.823690  593901 ssh_runner.go:195] Run: cat /version.json
	I1008 19:30:04.823715  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHHostname
	I1008 19:30:04.826621  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.826645  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.826974  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.827000  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.827024  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:04.827039  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:04.827173  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:04.827193  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHPort
	I1008 19:30:04.827379  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.827379  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHKeyPath
	I1008 19:30:04.827531  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:04.827535  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetSSHUsername
	I1008 19:30:04.827783  593901 sshutil.go:53] new ssh client: &{IP:192.168.72.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kindnet-981259/id_rsa Username:docker}
	I1008 19:30:04.827786  593901 sshutil.go:53] new ssh client: &{IP:192.168.72.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/kindnet-981259/id_rsa Username:docker}
	I1008 19:30:04.934512  593901 ssh_runner.go:195] Run: systemctl --version
	I1008 19:30:04.940752  593901 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:30:05.100214  593901 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:30:05.106433  593901 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:30:05.106516  593901 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:30:05.123473  593901 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:30:05.123497  593901 start.go:495] detecting cgroup driver to use...
	I1008 19:30:05.123715  593901 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:30:05.139596  593901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:30:05.152708  593901 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:30:05.152759  593901 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:30:05.167011  593901 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:30:05.180624  593901 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:30:05.295251  593901 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:30:05.432257  593901 docker.go:233] disabling docker service ...
	I1008 19:30:05.432324  593901 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:30:05.446729  593901 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:30:05.460961  593901 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:30:05.592282  593901 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:30:05.702284  593901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:30:05.716016  593901 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:30:05.736442  593901 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:30:05.736506  593901 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:30:05.747582  593901 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:30:05.747664  593901 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:30:05.758466  593901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:30:05.770168  593901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:30:05.780664  593901 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:30:05.791671  593901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:30:05.802293  593901 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:30:05.821268  593901 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
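The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image, cgroupfs as the cgroup manager with conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl. An in-memory Go sketch of the first two edits (the regexes mirror the sed patterns; this is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mimics, in memory, two of the sed edits shown above: pin
// the pause image and switch the cgroup manager to cgroupfs with conmon in the
// "pod" cgroup. Illustrative only; the real flow also removes any existing
// conmon_cgroup line and injects the default_sysctls entry.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\""
	fmt.Println(applyCrioOverrides(sample))
}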
	I1008 19:30:05.831357  593901 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:30:05.840370  593901 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:30:05.840426  593901 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:30:05.853512  593901 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:30:05.862492  593901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:30:05.995914  593901 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:30:06.107452  593901 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:30:06.107528  593901 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:30:06.113051  593901 start.go:563] Will wait 60s for crictl version
	I1008 19:30:06.113113  593901 ssh_runner.go:195] Run: which crictl
	I1008 19:30:06.117318  593901 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:30:06.157434  593901 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:30:06.157506  593901 ssh_runner.go:195] Run: crio --version
	I1008 19:30:06.189181  593901 ssh_runner.go:195] Run: crio --version
	I1008 19:30:06.220459  593901 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:30:04.821334  594684 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 19:30:04.821546  594684 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:30:04.821611  594684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:30:04.838361  594684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I1008 19:30:04.838834  594684 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:30:04.839421  594684 main.go:141] libmachine: Using API Version  1
	I1008 19:30:04.839446  594684 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:30:04.839866  594684 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:30:04.840074  594684 main.go:141] libmachine: (calico-981259) Calling .GetMachineName
	I1008 19:30:04.840265  594684 main.go:141] libmachine: (calico-981259) Calling .DriverName
	I1008 19:30:04.840420  594684 start.go:159] libmachine.API.Create for "calico-981259" (driver="kvm2")
	I1008 19:30:04.840453  594684 client.go:168] LocalClient.Create starting
	I1008 19:30:04.840484  594684 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem
	I1008 19:30:04.840522  594684 main.go:141] libmachine: Decoding PEM data...
	I1008 19:30:04.840548  594684 main.go:141] libmachine: Parsing certificate...
	I1008 19:30:04.840620  594684 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem
	I1008 19:30:04.840643  594684 main.go:141] libmachine: Decoding PEM data...
	I1008 19:30:04.840662  594684 main.go:141] libmachine: Parsing certificate...
	I1008 19:30:04.840690  594684 main.go:141] libmachine: Running pre-create checks...
	I1008 19:30:04.840708  594684 main.go:141] libmachine: (calico-981259) Calling .PreCreateCheck
	I1008 19:30:04.841049  594684 main.go:141] libmachine: (calico-981259) Calling .GetConfigRaw
	I1008 19:30:04.841527  594684 main.go:141] libmachine: Creating machine...
	I1008 19:30:04.841549  594684 main.go:141] libmachine: (calico-981259) Calling .Create
	I1008 19:30:04.841686  594684 main.go:141] libmachine: (calico-981259) Creating KVM machine...
	I1008 19:30:04.842853  594684 main.go:141] libmachine: (calico-981259) DBG | found existing default KVM network
	I1008 19:30:04.844442  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:04.844281  595579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f610}
	I1008 19:30:04.844474  594684 main.go:141] libmachine: (calico-981259) DBG | created network xml: 
	I1008 19:30:04.844503  594684 main.go:141] libmachine: (calico-981259) DBG | <network>
	I1008 19:30:04.844517  594684 main.go:141] libmachine: (calico-981259) DBG |   <name>mk-calico-981259</name>
	I1008 19:30:04.844525  594684 main.go:141] libmachine: (calico-981259) DBG |   <dns enable='no'/>
	I1008 19:30:04.844538  594684 main.go:141] libmachine: (calico-981259) DBG |   
	I1008 19:30:04.844550  594684 main.go:141] libmachine: (calico-981259) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 19:30:04.844566  594684 main.go:141] libmachine: (calico-981259) DBG |     <dhcp>
	I1008 19:30:04.844578  594684 main.go:141] libmachine: (calico-981259) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 19:30:04.844586  594684 main.go:141] libmachine: (calico-981259) DBG |     </dhcp>
	I1008 19:30:04.844608  594684 main.go:141] libmachine: (calico-981259) DBG |   </ip>
	I1008 19:30:04.844619  594684 main.go:141] libmachine: (calico-981259) DBG |   
	I1008 19:30:04.844627  594684 main.go:141] libmachine: (calico-981259) DBG | </network>
	I1008 19:30:04.844640  594684 main.go:141] libmachine: (calico-981259) DBG | 
	I1008 19:30:04.849476  594684 main.go:141] libmachine: (calico-981259) DBG | trying to create private KVM network mk-calico-981259 192.168.39.0/24...
	I1008 19:30:04.925801  594684 main.go:141] libmachine: (calico-981259) DBG | private KVM network mk-calico-981259 192.168.39.0/24 created
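network.go picked the free private subnet 192.168.39.0/24 and derived the gateway (.1), the DHCP client range (.2-.254) and the broadcast address (.255) used in the libvirt network XML above. A Go sketch of that derivation for a /24; the helper name is hypothetical:

package main

import (
	"fmt"
	"net"
)

// subnetPlan derives the gateway, DHCP client range and broadcast address the
// log reports for a freshly picked /24 (gateway .1, clients .2-.254,
// broadcast .255). Sketch only; it assumes a /24 and a hypothetical helper name.
func subnetPlan(cidr string) (gateway, clientMin, clientMax, broadcast net.IP, err error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, nil, nil, nil, err
	}
	base := ipnet.IP.To4()
	at := func(last byte) net.IP {
		ip := make(net.IP, 4)
		copy(ip, base)
		ip[3] = last
		return ip
	}
	return at(1), at(2), at(254), at(255), nil
}

func main() {
	gw, lo, hi, bc, err := subnetPlan("192.168.39.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(gw, lo, hi, bc) // 192.168.39.1 192.168.39.2 192.168.39.254 192.168.39.255
}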
	I1008 19:30:04.925831  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:04.925749  595579 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:30:04.925843  594684 main.go:141] libmachine: (calico-981259) Setting up store path in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259 ...
	I1008 19:30:04.925866  594684 main.go:141] libmachine: (calico-981259) Building disk image from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 19:30:04.925892  594684 main.go:141] libmachine: (calico-981259) Downloading /home/jenkins/minikube-integration/19774-529764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1008 19:30:05.201260  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:05.201141  595579 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259/id_rsa...
	I1008 19:30:05.311091  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:05.310954  595579 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259/calico-981259.rawdisk...
	I1008 19:30:05.311121  594684 main.go:141] libmachine: (calico-981259) DBG | Writing magic tar header
	I1008 19:30:05.311141  594684 main.go:141] libmachine: (calico-981259) DBG | Writing SSH key tar header
	I1008 19:30:05.311165  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:05.311126  595579 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259 ...
	I1008 19:30:05.311256  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259
	I1008 19:30:05.311297  594684 main.go:141] libmachine: (calico-981259) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259 (perms=drwx------)
	I1008 19:30:05.311317  594684 main.go:141] libmachine: (calico-981259) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube/machines (perms=drwxr-xr-x)
	I1008 19:30:05.311328  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube/machines
	I1008 19:30:05.311349  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:30:05.311359  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19774-529764
	I1008 19:30:05.311369  594684 main.go:141] libmachine: (calico-981259) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764/.minikube (perms=drwxr-xr-x)
	I1008 19:30:05.311390  594684 main.go:141] libmachine: (calico-981259) Setting executable bit set on /home/jenkins/minikube-integration/19774-529764 (perms=drwxrwxr-x)
	I1008 19:30:05.311404  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1008 19:30:05.311413  594684 main.go:141] libmachine: (calico-981259) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 19:30:05.311428  594684 main.go:141] libmachine: (calico-981259) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 19:30:05.311438  594684 main.go:141] libmachine: (calico-981259) Creating domain...
	I1008 19:30:05.311446  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home/jenkins
	I1008 19:30:05.311463  594684 main.go:141] libmachine: (calico-981259) DBG | Checking permissions on dir: /home
	I1008 19:30:05.311478  594684 main.go:141] libmachine: (calico-981259) DBG | Skipping /home - not owner
	I1008 19:30:05.312676  594684 main.go:141] libmachine: (calico-981259) define libvirt domain using xml: 
	I1008 19:30:05.312691  594684 main.go:141] libmachine: (calico-981259) <domain type='kvm'>
	I1008 19:30:05.312715  594684 main.go:141] libmachine: (calico-981259)   <name>calico-981259</name>
	I1008 19:30:05.312726  594684 main.go:141] libmachine: (calico-981259)   <memory unit='MiB'>3072</memory>
	I1008 19:30:05.312733  594684 main.go:141] libmachine: (calico-981259)   <vcpu>2</vcpu>
	I1008 19:30:05.312742  594684 main.go:141] libmachine: (calico-981259)   <features>
	I1008 19:30:05.312749  594684 main.go:141] libmachine: (calico-981259)     <acpi/>
	I1008 19:30:05.312757  594684 main.go:141] libmachine: (calico-981259)     <apic/>
	I1008 19:30:05.312774  594684 main.go:141] libmachine: (calico-981259)     <pae/>
	I1008 19:30:05.312789  594684 main.go:141] libmachine: (calico-981259)     
	I1008 19:30:05.312800  594684 main.go:141] libmachine: (calico-981259)   </features>
	I1008 19:30:05.312806  594684 main.go:141] libmachine: (calico-981259)   <cpu mode='host-passthrough'>
	I1008 19:30:05.312812  594684 main.go:141] libmachine: (calico-981259)   
	I1008 19:30:05.312818  594684 main.go:141] libmachine: (calico-981259)   </cpu>
	I1008 19:30:05.312828  594684 main.go:141] libmachine: (calico-981259)   <os>
	I1008 19:30:05.312834  594684 main.go:141] libmachine: (calico-981259)     <type>hvm</type>
	I1008 19:30:05.312844  594684 main.go:141] libmachine: (calico-981259)     <boot dev='cdrom'/>
	I1008 19:30:05.312850  594684 main.go:141] libmachine: (calico-981259)     <boot dev='hd'/>
	I1008 19:30:05.312859  594684 main.go:141] libmachine: (calico-981259)     <bootmenu enable='no'/>
	I1008 19:30:05.312866  594684 main.go:141] libmachine: (calico-981259)   </os>
	I1008 19:30:05.312873  594684 main.go:141] libmachine: (calico-981259)   <devices>
	I1008 19:30:05.312880  594684 main.go:141] libmachine: (calico-981259)     <disk type='file' device='cdrom'>
	I1008 19:30:05.312892  594684 main.go:141] libmachine: (calico-981259)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259/boot2docker.iso'/>
	I1008 19:30:05.312902  594684 main.go:141] libmachine: (calico-981259)       <target dev='hdc' bus='scsi'/>
	I1008 19:30:05.312909  594684 main.go:141] libmachine: (calico-981259)       <readonly/>
	I1008 19:30:05.312918  594684 main.go:141] libmachine: (calico-981259)     </disk>
	I1008 19:30:05.312927  594684 main.go:141] libmachine: (calico-981259)     <disk type='file' device='disk'>
	I1008 19:30:05.312938  594684 main.go:141] libmachine: (calico-981259)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 19:30:05.312953  594684 main.go:141] libmachine: (calico-981259)       <source file='/home/jenkins/minikube-integration/19774-529764/.minikube/machines/calico-981259/calico-981259.rawdisk'/>
	I1008 19:30:05.312965  594684 main.go:141] libmachine: (calico-981259)       <target dev='hda' bus='virtio'/>
	I1008 19:30:05.312974  594684 main.go:141] libmachine: (calico-981259)     </disk>
	I1008 19:30:05.312984  594684 main.go:141] libmachine: (calico-981259)     <interface type='network'>
	I1008 19:30:05.312993  594684 main.go:141] libmachine: (calico-981259)       <source network='mk-calico-981259'/>
	I1008 19:30:05.313002  594684 main.go:141] libmachine: (calico-981259)       <model type='virtio'/>
	I1008 19:30:05.313010  594684 main.go:141] libmachine: (calico-981259)     </interface>
	I1008 19:30:05.313029  594684 main.go:141] libmachine: (calico-981259)     <interface type='network'>
	I1008 19:30:05.313040  594684 main.go:141] libmachine: (calico-981259)       <source network='default'/>
	I1008 19:30:05.313046  594684 main.go:141] libmachine: (calico-981259)       <model type='virtio'/>
	I1008 19:30:05.313054  594684 main.go:141] libmachine: (calico-981259)     </interface>
	I1008 19:30:05.313066  594684 main.go:141] libmachine: (calico-981259)     <serial type='pty'>
	I1008 19:30:05.313077  594684 main.go:141] libmachine: (calico-981259)       <target port='0'/>
	I1008 19:30:05.313084  594684 main.go:141] libmachine: (calico-981259)     </serial>
	I1008 19:30:05.313096  594684 main.go:141] libmachine: (calico-981259)     <console type='pty'>
	I1008 19:30:05.313102  594684 main.go:141] libmachine: (calico-981259)       <target type='serial' port='0'/>
	I1008 19:30:05.313109  594684 main.go:141] libmachine: (calico-981259)     </console>
	I1008 19:30:05.313115  594684 main.go:141] libmachine: (calico-981259)     <rng model='virtio'>
	I1008 19:30:05.313123  594684 main.go:141] libmachine: (calico-981259)       <backend model='random'>/dev/random</backend>
	I1008 19:30:05.313132  594684 main.go:141] libmachine: (calico-981259)     </rng>
	I1008 19:30:05.313138  594684 main.go:141] libmachine: (calico-981259)     
	I1008 19:30:05.313146  594684 main.go:141] libmachine: (calico-981259)     
	I1008 19:30:05.313153  594684 main.go:141] libmachine: (calico-981259)   </devices>
	I1008 19:30:05.313168  594684 main.go:141] libmachine: (calico-981259) </domain>
	I1008 19:30:05.313182  594684 main.go:141] libmachine: (calico-981259) 
	I1008 19:30:05.318109  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:3d:8f:63 in network default
	I1008 19:30:05.318790  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:05.318821  594684 main.go:141] libmachine: (calico-981259) Ensuring networks are active...
	I1008 19:30:05.319510  594684 main.go:141] libmachine: (calico-981259) Ensuring network default is active
	I1008 19:30:05.319813  594684 main.go:141] libmachine: (calico-981259) Ensuring network mk-calico-981259 is active
	I1008 19:30:05.320268  594684 main.go:141] libmachine: (calico-981259) Getting domain xml...
	I1008 19:30:05.320932  594684 main.go:141] libmachine: (calico-981259) Creating domain...
	I1008 19:30:06.604875  594684 main.go:141] libmachine: (calico-981259) Waiting to get IP...
	I1008 19:30:06.605944  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:06.606512  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:06.606541  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:06.606464  595579 retry.go:31] will retry after 292.812666ms: waiting for machine to come up
	I1008 19:30:06.901199  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:06.901745  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:06.901775  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:06.901701  595579 retry.go:31] will retry after 289.177598ms: waiting for machine to come up
	I1008 19:30:07.192449  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:07.193001  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:07.193034  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:07.192947  595579 retry.go:31] will retry after 307.400152ms: waiting for machine to come up
	I1008 19:30:07.502550  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:07.503094  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:07.503134  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:07.503042  595579 retry.go:31] will retry after 547.279786ms: waiting for machine to come up
	I1008 19:30:08.051552  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:08.052042  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:08.052072  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:08.051987  595579 retry.go:31] will retry after 541.511552ms: waiting for machine to come up
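While the calico-981259 domain boots, the driver polls for a DHCP lease and backs off with growing delays ("will retry after ...: waiting for machine to come up"). A simplified Go sketch of such a retry loop; the backoff factor and jitter are assumptions, not the exact retry.go schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with growing, slightly jittered delays, similar
// to the retry lines above. The backoff factor and jitter are assumptions.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		d := delay + time.Duration(rand.Int63n(int64(delay/4)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine did not report an IP in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.2", nil // hypothetical address in the subnet created above
	}, 10)
	fmt.Println(ip, err)
}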
	I1008 19:30:06.221645  593901 main.go:141] libmachine: (kindnet-981259) Calling .GetIP
	I1008 19:30:06.224948  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:06.225368  593901 main.go:141] libmachine: (kindnet-981259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b2:c4", ip: ""} in network mk-kindnet-981259: {Iface:virbr1 ExpiryTime:2024-10-08 20:29:55 +0000 UTC Type:0 Mac:52:54:00:ab:b2:c4 Iaid: IPaddr:192.168.72.93 Prefix:24 Hostname:kindnet-981259 Clientid:01:52:54:00:ab:b2:c4}
	I1008 19:30:06.225388  593901 main.go:141] libmachine: (kindnet-981259) DBG | domain kindnet-981259 has defined IP address 192.168.72.93 and MAC address 52:54:00:ab:b2:c4 in network mk-kindnet-981259
	I1008 19:30:06.225638  593901 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:30:06.229769  593901 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
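The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends a fresh "192.168.72.1<tab>host.minikube.internal" line. The same upsert, sketched in Go over an in-memory hosts file (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry reproduces, in memory, the shell pipeline above: drop any
// existing line for the name, then append "<ip>\t<name>". Illustrative only.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // discard the stale entry, as grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n")
}

func main() {
	hosts := "127.0.0.1\tlocalhost"
	fmt.Println(upsertHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
}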
	I1008 19:30:06.242834  593901 kubeadm.go:883] updating cluster {Name:kindnet-981259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:kindnet-981259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:30:06.242957  593901 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:30:06.242997  593901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:30:06.283447  593901 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:30:06.283516  593901 ssh_runner.go:195] Run: which lz4
	I1008 19:30:06.288395  593901 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:30:06.293056  593901 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:30:06.293084  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:30:07.704858  593901 crio.go:462] duration metric: took 1.41650521s to copy over tarball
	I1008 19:30:07.704940  593901 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:30:09.831603  593901 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126623691s)
	I1008 19:30:09.831649  593901 crio.go:469] duration metric: took 2.126757774s to extract the tarball
	I1008 19:30:09.831686  593901 ssh_runner.go:146] rm: /preloaded.tar.lz4
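Several of the steps above are reported as "duration metric: took ..." (1.4s to copy the preload tarball over, another 2.1s to extract it). A tiny Go sketch of such a timing wrapper; the helper is hypothetical, not minikube's:

package main

import (
	"fmt"
	"time"
)

// durationMetric mirrors the "duration metric: took ..." lines in the log:
// time an arbitrary step and report how long it took. Hypothetical helper.
func durationMetric(name string, step func() error) error {
	start := time.Now()
	err := step()
	fmt.Printf("duration metric: took %v to %s\n", time.Since(start), name)
	return err
}

func main() {
	_ = durationMetric("copy over tarball", func() error {
		time.Sleep(50 * time.Millisecond) // stand-in for the scp and extraction work
		return nil
	})
}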
	I1008 19:30:09.869326  593901 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:30:09.911413  593901 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:30:09.911447  593901 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:30:09.911458  593901 kubeadm.go:934] updating node { 192.168.72.93 8443 v1.31.1 crio true true} ...
	I1008 19:30:09.911591  593901 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-981259 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kindnet-981259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
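The kubelet drop-in above pins the binary to the requested Kubernetes version and passes per-node flags (hostname override and node IP). A Go sketch that assembles that ExecStart line from the values shown in the log; the function is illustrative, not minikube's template code:

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart assembles the ExecStart line shown in the systemd drop-in
// above from per-node values. A sketch with a hypothetical helper name.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	args := []string{
		fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", version),
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.1", "kindnet-981259", "192.168.72.93"))
}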
	I1008 19:30:09.911702  593901 ssh_runner.go:195] Run: crio config
	I1008 19:30:09.967009  593901 cni.go:84] Creating CNI manager for "kindnet"
	I1008 19:30:09.967065  593901 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:30:09.967177  593901 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.93 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-981259 NodeName:kindnet-981259 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:30:09.967565  593901 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-981259"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
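The generated kubeadm config above embeds the pod subnet, service subnet and DNS domain taken from the options printed at kubeadm.go:181. A minimal Go text/template sketch that renders just the networking stanza from those values; the struct and template are assumptions for illustration, not minikube's own:

package main

import (
	"os"
	"text/template"
)

// networking holds the three values the log shows feeding the ClusterConfiguration
// networking stanza. Both the struct and template are illustrative only.
type networking struct {
	DNSDomain     string
	PodSubnet     string
	ServiceSubnet string
}

const netTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("networking").Parse(netTmpl))
	if err := t.Execute(os.Stdout, networking{
		DNSDomain:     "cluster.local",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}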
	I1008 19:30:09.967932  593901 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:30:09.982447  593901 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:30:09.982521  593901 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:30:09.995618  593901 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1008 19:30:10.015829  593901 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:30:10.036130  593901 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I1008 19:30:10.056233  593901 ssh_runner.go:195] Run: grep 192.168.72.93	control-plane.minikube.internal$ /etc/hosts
	I1008 19:30:10.060649  593901 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:30:10.073426  593901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:30:10.196544  593901 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:30:10.213963  593901 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259 for IP: 192.168.72.93
	I1008 19:30:10.213991  593901 certs.go:194] generating shared ca certs ...
	I1008 19:30:10.214015  593901 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.214216  593901 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:30:10.214273  593901 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:30:10.214286  593901 certs.go:256] generating profile certs ...
	I1008 19:30:10.214378  593901 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/client.key
	I1008 19:30:10.214406  593901 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/client.crt with IP's: []
	I1008 19:30:10.334455  593901 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/client.crt ...
	I1008 19:30:10.334489  593901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/client.crt: {Name:mk47e6a7ca43e7223e8d64412859608114c269fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.334691  593901 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/client.key ...
	I1008 19:30:10.334708  593901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/client.key: {Name:mkc468f53cb70c97d432088fb9b02b03815b31dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.334823  593901 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.key.62c107e8
	I1008 19:30:10.334841  593901 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.crt.62c107e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.93]
	I1008 19:30:10.424176  593901 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.crt.62c107e8 ...
	I1008 19:30:10.424207  593901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.crt.62c107e8: {Name:mk2fb333c3b55b35e45bbfbef279e4395a202314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.424396  593901 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.key.62c107e8 ...
	I1008 19:30:10.424414  593901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.key.62c107e8: {Name:mkd81e287a38de92e0b1a35369a5752995247683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.424524  593901 certs.go:381] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.crt.62c107e8 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.crt
	I1008 19:30:10.424630  593901 certs.go:385] copying /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.key.62c107e8 -> /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.key
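The apiserver certificate generated above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.93]; 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service IP. A short Go sketch of that derivation; the helper name is hypothetical:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the first usable address of a service CIDR, which is
// why 10.96.0.1 appears among the apiserver certificate SANs above. Sketch only.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	base := ipnet.IP.To4()
	first := make(net.IP, len(base))
	copy(first, base)
	first[len(first)-1]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}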
	I1008 19:30:10.424691  593901 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.key
	I1008 19:30:10.424710  593901 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.crt with IP's: []
	I1008 19:30:10.495700  593901 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.crt ...
	I1008 19:30:10.495731  593901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.crt: {Name:mk25dfdcd08e1e8896eefd6e86a9669c9338f8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.495908  593901 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.key ...
	I1008 19:30:10.495942  593901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.key: {Name:mk960a5a1931971fd01eddcbef00371890da4502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:30:10.496164  593901 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:30:10.496204  593901 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:30:10.496215  593901 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:30:10.496250  593901 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:30:10.496278  593901 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:30:10.496301  593901 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:30:10.496339  593901 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:30:10.496995  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:30:10.521724  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:30:10.544744  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:30:10.567208  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:30:10.593217  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 19:30:10.619834  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:30:10.642888  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:30:10.665991  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/kindnet-981259/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:30:10.689338  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:30:10.712866  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:30:10.735621  593901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:30:10.758356  593901 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:30:10.774707  593901 ssh_runner.go:195] Run: openssl version
	I1008 19:30:10.780571  593901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:30:10.790546  593901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:30:10.794897  593901 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:30:10.794942  593901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:30:10.800414  593901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:30:10.810613  593901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:30:10.820443  593901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:30:10.824888  593901 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:30:10.824937  593901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:30:10.830588  593901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:30:10.840825  593901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:30:10.851623  593901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:30:10.855909  593901 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:30:10.855958  593901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:30:10.861253  593901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:30:10.871419  593901 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:30:10.875344  593901 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 19:30:10.875395  593901 kubeadm.go:392] StartCluster: {Name:kindnet-981259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:kindnet-981259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:30:10.875467  593901 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:30:10.875524  593901 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:30:10.923558  593901 cri.go:89] found id: ""
	I1008 19:30:10.923629  593901 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:30:10.936687  593901 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:30:10.950346  593901 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:30:10.964201  593901 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:30:10.964235  593901 kubeadm.go:157] found existing configuration files:
	
	I1008 19:30:10.964292  593901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:30:10.973988  593901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:30:10.974041  593901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:30:10.983335  593901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:30:10.993375  593901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:30:10.993439  593901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:30:11.002491  593901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:30:11.011581  593901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:30:11.011630  593901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:30:11.020922  593901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:30:11.029425  593901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:30:11.029482  593901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:30:11.039325  593901 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:30:11.098349  593901 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:30:11.098506  593901 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:30:11.202539  593901 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:30:11.202723  593901 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:30:11.202891  593901 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:30:11.219012  593901 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:30:08.594806  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:08.595306  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:08.595331  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:08.595265  595579 retry.go:31] will retry after 573.710406ms: waiting for machine to come up
	I1008 19:30:09.171153  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:09.171731  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:09.171770  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:09.171672  595579 retry.go:31] will retry after 1.010769784s: waiting for machine to come up
	I1008 19:30:10.184678  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:10.185604  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:10.185649  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:10.185553  595579 retry.go:31] will retry after 1.1428717s: waiting for machine to come up
	I1008 19:30:11.330661  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:11.331102  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:11.331127  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:11.331058  595579 retry.go:31] will retry after 1.678248484s: waiting for machine to come up
	I1008 19:30:13.011950  594684 main.go:141] libmachine: (calico-981259) DBG | domain calico-981259 has defined MAC address 52:54:00:a2:8b:32 in network mk-calico-981259
	I1008 19:30:13.012364  594684 main.go:141] libmachine: (calico-981259) DBG | unable to find current IP address of domain calico-981259 in network mk-calico-981259
	I1008 19:30:13.012396  594684 main.go:141] libmachine: (calico-981259) DBG | I1008 19:30:13.012323  595579 retry.go:31] will retry after 2.109027468s: waiting for machine to come up
	I1008 19:30:11.276200  593901 out.go:235]   - Generating certificates and keys ...
	I1008 19:30:11.276353  593901 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:30:11.276442  593901 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:30:11.522073  593901 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 19:30:11.748035  593901 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1008 19:30:12.293086  593901 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1008 19:30:12.386023  593901 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1008 19:30:12.660538  593901 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1008 19:30:12.660724  593901 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-981259 localhost] and IPs [192.168.72.93 127.0.0.1 ::1]
	I1008 19:30:12.742410  593901 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1008 19:30:12.742642  593901 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-981259 localhost] and IPs [192.168.72.93 127.0.0.1 ::1]
	I1008 19:30:12.968978  593901 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 19:30:13.073769  593901 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 19:30:13.168991  593901 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1008 19:30:13.169208  593901 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:30:13.294685  593901 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:30:13.503945  593901 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:30:13.854391  593901 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:30:13.958872  593901 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:30:14.016516  593901 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:30:14.017446  593901 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:30:14.019921  593901 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:30:14.021653  593901 out.go:235]   - Booting up control plane ...
	I1008 19:30:14.021801  593901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:30:14.021912  593901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:30:14.022008  593901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:30:14.041781  593901 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:30:14.049843  593901 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:30:14.049905  593901 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:30:14.173767  593901 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:30:14.173974  593901 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.505926282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415815505897988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=886fdb49-089e-4170-853f-2708133b1fb9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.506682131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f560cf2-491f-4eb2-b84e-8ac2bc2c7404 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.506748548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f560cf2-491f-4eb2-b84e-8ac2bc2c7404 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.507009042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f560cf2-491f-4eb2-b84e-8ac2bc2c7404 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.557356298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f56782c-7865-4802-81f3-cef0f52bdbc3 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.557482766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f56782c-7865-4802-81f3-cef0f52bdbc3 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.559439966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=167bb840-53d2-4ff4-a849-caf5f23e783a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.559979613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415815559949106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=167bb840-53d2-4ff4-a849-caf5f23e783a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.560737483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93a1b0cc-a169-4900-bbc6-a8ae3856da04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.560900882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93a1b0cc-a169-4900-bbc6-a8ae3856da04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.561534693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93a1b0cc-a169-4900-bbc6-a8ae3856da04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.598560679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75cee1ea-5202-4239-a423-c3456ddce70e name=/runtime.v1.RuntimeService/Version
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.598653636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75cee1ea-5202-4239-a423-c3456ddce70e name=/runtime.v1.RuntimeService/Version
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.602465985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0d660a9-ac80-48f1-89b4-c3f977ef523a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.603041873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415815603004168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0d660a9-ac80-48f1-89b4-c3f977ef523a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.603708888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=004ca59d-eca8-40d5-931a-a53ee59900e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.603775875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=004ca59d-eca8-40d5-931a-a53ee59900e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.604172074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=004ca59d-eca8-40d5-931a-a53ee59900e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.638698282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=054b2f02-3b18-4c4d-9431-9d51b4701fe1 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.638770252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=054b2f02-3b18-4c4d-9431-9d51b4701fe1 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.640438926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ff308b8-2cc1-4f62-b5c4-4e17ae8787a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.640825105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415815640805315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ff308b8-2cc1-4f62-b5c4-4e17ae8787a2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.641497929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40a2d305-acc7-4a2e-844e-0a30fbcb53f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.641572043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40a2d305-acc7-4a2e-844e-0a30fbcb53f1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:30:15 default-k8s-diff-port-142496 crio[699]: time="2024-10-08 19:30:15.641760461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7,PodSandboxId:aef121dc11aa07f1f5426be94ec9801733020fec42ba1e1ad5e85f4b78241f4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771922027170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x4j67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89141081-eb1e-466a-913d-597e8df02125,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96,PodSandboxId:8a22a6828aca8186cf4dd66f43d1ac57471c4fc6b4eb964ea8b342f755630d42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414771870971471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wrz7s,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e441884e-7c57-4a73-86bb-c46629d2eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce,PodSandboxId:82ad50a49857f1fa78dc670e4c7a268eecac5041fd93415de1faf94ed94d153e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTA
INER_RUNNING,CreatedAt:1728414771462572135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wd5kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714118a5-ec5d-448c-ad63-7f0303d00eb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938,PodSandboxId:929cdea5e6572c2524cceb89186a62b39425a8bf84f91d5933caeebd9c19a70c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1
728414771397418608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c57b3f-59d9-49bb-ba82-caee6af45bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1,PodSandboxId:7f7fcc2d3657a06309318fbad24ae0ec0fc80094f5e3bcd6e99e42ac7fad08b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414759441721556,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a73ab945a8a2e63f2d0e0a2a3fa9d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414,PodSandboxId:44750ffcb7f99e48cc08416e20a53f358684cf48876d33f23ac0ce449b86966d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414759379210518,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21240ab4672b709011cc56e9d7153a1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711,PodSandboxId:8b411e11c8bca3b1d42831d34f1c577392ca720ab9c761132109042fe0e8f1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414759357518447,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 463572b0fbfb93adebd54796294d940c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00,PodSandboxId:6c3f46d4c4069158ef20a744278fae5de197fbb87cd511ec8ab97ec7e98f0c75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414759326373612,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370,PodSandboxId:22b1938c28189a29e4a976007ff285d20612f552978e9065b8c28d612fed7d6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728414470408135987,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23aa41c5b7e4060e257e9fbf18f818b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40a2d305-acc7-4a2e-844e-0a30fbcb53f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4aade92288be       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   aef121dc11aa0       coredns-7c65d6cfc9-x4j67
	f4ffba8b3e548       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   8a22a6828aca8       coredns-7c65d6cfc9-wrz7s
	316c2be1cd9b8       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 minutes ago      Running             kube-proxy                0                   82ad50a49857f       kube-proxy-wd5kv
	1affa0d5c85f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   929cdea5e6572       storage-provisioner
	11ee4a9677fea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   7f7fcc2d3657a       etcd-default-k8s-diff-port-142496
	e28f698409b14       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 minutes ago      Running             kube-scheduler            2                   44750ffcb7f99       kube-scheduler-default-k8s-diff-port-142496
	143017fc423ec       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 minutes ago      Running             kube-controller-manager   2                   8b411e11c8bca       kube-controller-manager-default-k8s-diff-port-142496
	04efd41bf2d49       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Running             kube-apiserver            2                   6c3f46d4c4069       kube-apiserver-default-k8s-diff-port-142496
	ab91519f523bd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 minutes ago      Exited              kube-apiserver            1                   22b1938c28189       kube-apiserver-default-k8s-diff-port-142496
	
	
	==> coredns [a4aade92288be06e69bbaf1f647561dfe841105d0ffd8c2bb60c9f68fe22a7e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f4ffba8b3e548d8dce04c71fa1fd0345ace4aa632a76654726f9bc0cf6526b96] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-142496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-142496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=default-k8s-diff-port-142496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 19:12:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-142496
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 19:30:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 19:28:11 +0000   Tue, 08 Oct 2024 19:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 19:28:11 +0000   Tue, 08 Oct 2024 19:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 19:28:11 +0000   Tue, 08 Oct 2024 19:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 19:28:11 +0000   Tue, 08 Oct 2024 19:12:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.213
	  Hostname:    default-k8s-diff-port-142496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e918f3f98174aa5aaa05fc0956fcda2
	  System UUID:                8e918f3f-9817-4aa5-aaa0-5fc0956fcda2
	  Boot ID:                    5e0b3d23-4e67-45eb-89f9-edcb3778f372
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-wrz7s                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-x4j67                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-142496                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-142496             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-142496    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-wd5kv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-142496             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-wvh5g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-142496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-142496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-142496 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node default-k8s-diff-port-142496 event: Registered Node default-k8s-diff-port-142496 in Controller
	
	
	==> dmesg <==
	[  +0.052034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045144] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.968150] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471282] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.569606] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.751066] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.056308] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082075] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.203244] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.168564] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.296220] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +4.177346] systemd-fstab-generator[784]: Ignoring "noauto" option for root device
	[  +2.033266] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.062972] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.557632] kauditd_printk_skb: 69 callbacks suppressed
	[Oct 8 19:08] kauditd_printk_skb: 87 callbacks suppressed
	[Oct 8 19:12] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.305003] systemd-fstab-generator[2556]: Ignoring "noauto" option for root device
	[  +4.709444] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.352522] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +5.400838] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[  +0.115383] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.558924] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [11ee4a9677fea92ead11bc08bfd537c1fd3df5ca22d96172800b60033d8438b1] <==
	{"level":"info","ts":"2024-10-08T19:12:40.405278Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:12:40.405462Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.213:2379"}
	{"level":"info","ts":"2024-10-08T19:12:40.405805Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:12:40.406590Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T19:22:40.459863Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2024-10-08T19:22:40.468744Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"8.404675ms","hash":1941118404,"current-db-size-bytes":2076672,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2076672,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-10-08T19:22:40.468798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1941118404,"revision":688,"compact-revision":-1}
	{"level":"info","ts":"2024-10-08T19:27:40.467818Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":930}
	{"level":"info","ts":"2024-10-08T19:27:40.471423Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":930,"took":"2.977018ms","hash":644045346,"current-db-size-bytes":2076672,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1478656,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-10-08T19:27:40.471499Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":644045346,"revision":930,"compact-revision":688}
	{"level":"info","ts":"2024-10-08T19:28:32.658931Z","caller":"traceutil/trace.go:171","msg":"trace[1513117180] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"344.222247ms","start":"2024-10-08T19:28:32.314676Z","end":"2024-10-08T19:28:32.658898Z","steps":["trace[1513117180] 'process raft request'  (duration: 344.11585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:28:32.660111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T19:28:32.314660Z","time spent":"344.733621ms","remote":"127.0.0.1:41968","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1217 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-08T19:28:33.924839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.915275ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6369376772210526992 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.213\" mod_revision:1211 > success:<request_put:<key:\"/registry/masterleases/192.168.50.213\" value_size:67 lease:6369376772210526989 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.213\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-08T19:28:33.925288Z","caller":"traceutil/trace.go:171","msg":"trace[998955090] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"387.567804ms","start":"2024-10-08T19:28:33.537701Z","end":"2024-10-08T19:28:33.925268Z","steps":["trace[998955090] 'process raft request'  (duration: 126.890686ms)","trace[998955090] 'compare'  (duration: 259.794416ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-08T19:28:33.925390Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-08T19:28:33.537685Z","time spent":"387.659903ms","remote":"127.0.0.1:41828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.213\" mod_revision:1211 > success:<request_put:<key:\"/registry/masterleases/192.168.50.213\" value_size:67 lease:6369376772210526989 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.213\" > >"}
	{"level":"warn","ts":"2024-10-08T19:28:58.657782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.013505ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6369376772210527138 > lease_revoke:<id:5864926d8c3b8b49>","response":"size:28"}
	{"level":"info","ts":"2024-10-08T19:28:59.072876Z","caller":"traceutil/trace.go:171","msg":"trace[1458573834] linearizableReadLoop","detail":"{readStateIndex:1452; appliedIndex:1451; }","duration":"215.015891ms","start":"2024-10-08T19:28:58.857841Z","end":"2024-10-08T19:28:59.072857Z","steps":["trace[1458573834] 'read index received'  (duration: 214.774513ms)","trace[1458573834] 'applied index is now lower than readState.Index'  (duration: 240.788µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-08T19:28:59.073040Z","caller":"traceutil/trace.go:171","msg":"trace[2102390268] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"216.541993ms","start":"2024-10-08T19:28:58.856491Z","end":"2024-10-08T19:28:59.073033Z","steps":["trace[2102390268] 'process raft request'  (duration: 216.215449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:28:59.073412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.545864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-08T19:28:59.074751Z","caller":"traceutil/trace.go:171","msg":"trace[4472791] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1240; }","duration":"216.894219ms","start":"2024-10-08T19:28:58.857838Z","end":"2024-10-08T19:28:59.074732Z","steps":["trace[4472791] 'agreement among raft nodes before linearized reading'  (duration: 215.530624ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-08T19:30:11.809761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.143379ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6369376772210527572 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-yzcmycmf25iwe3bpklon2nqhiu\" mod_revision:1291 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-yzcmycmf25iwe3bpklon2nqhiu\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-yzcmycmf25iwe3bpklon2nqhiu\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-08T19:30:11.810389Z","caller":"traceutil/trace.go:171","msg":"trace[1493403241] transaction","detail":"{read_only:false; response_revision:1299; number_of_response:1; }","duration":"214.95864ms","start":"2024-10-08T19:30:11.595395Z","end":"2024-10-08T19:30:11.810354Z","steps":["trace[1493403241] 'process raft request'  (duration: 40.98004ms)","trace[1493403241] 'compare'  (duration: 172.936665ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-08T19:30:13.688939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.120577ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-10-08T19:30:13.689159Z","caller":"traceutil/trace.go:171","msg":"trace[1810883228] transaction","detail":"{read_only:false; response_revision:1301; number_of_response:1; }","duration":"115.103218ms","start":"2024-10-08T19:30:13.574043Z","end":"2024-10-08T19:30:13.689146Z","steps":["trace[1810883228] 'process raft request'  (duration: 114.96155ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-08T19:30:13.689174Z","caller":"traceutil/trace.go:171","msg":"trace[811557113] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:1300; }","duration":"116.371678ms","start":"2024-10-08T19:30:13.572788Z","end":"2024-10-08T19:30:13.689160Z","steps":["trace[811557113] 'range keys from in-memory index tree'  (duration: 115.712612ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:30:16 up 22 min,  0 users,  load average: 0.23, 0.26, 0.23
	Linux default-k8s-diff-port-142496 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04efd41bf2d49bb7d9706353cbfe0c74e1e6f223b3554a2f5161b641a54eec00] <==
	I1008 19:25:42.795764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:25:42.795821       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:27:41.794371       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:27:41.794587       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1008 19:27:42.796847       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:27:42.796905       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:27:42.797023       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:27:42.797181       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:27:42.798111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:27:42.799214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:28:42.799229       1 handler_proxy.go:99] no RequestInfo found in the context
	W1008 19:28:42.799400       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:28:42.799466       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1008 19:28:42.799537       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1008 19:28:42.801366       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:28:42.801409       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ab91519f523bdd70789298f4698e16275e5c7a7691b0a3fdb15629f002d09370] <==
	W1008 19:12:31.127431       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:31.194449       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:31.230299       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:31.388938       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:34.931301       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:34.982564       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.227301       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.592387       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.905015       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.932721       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:35.988741       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.020618       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.033261       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.074618       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.160589       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.312726       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.380883       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.431036       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.432408       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.436841       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.482118       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.529358       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.529466       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.594822       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 19:12:36.627438       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [143017fc423ec24453d3cfe823acf6039a8b8c1d7b916a7b69b3ead5cd1ac711] <==
	E1008 19:24:48.914789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:24:49.406259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:25:18.921774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:25:19.412813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:25:48.928659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:25:49.421273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:26:18.934873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:26:19.428923       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:26:48.941239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:26:49.436943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:27:18.947606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:27:19.452014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:27:48.954561       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:27:49.460472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:28:11.993157       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-142496"
	E1008 19:28:18.961529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:28:19.468292       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:28:48.968261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:28:49.475722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:28:52.407957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="279.719µs"
	I1008 19:29:05.408765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="190.902µs"
	E1008 19:29:18.974737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:29:19.490388       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:29:48.981233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:29:49.499740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [316c2be1cd9b8e8450c8cd04cd26df20ec6f91633b3183352ce6251d92b9acce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 19:12:51.851174       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 19:12:51.890000       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.213"]
	E1008 19:12:51.890323       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 19:12:52.017439       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 19:12:52.017495       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 19:12:52.017517       1 server_linux.go:169] "Using iptables Proxier"
	I1008 19:12:52.039117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 19:12:52.039388       1 server.go:483] "Version info" version="v1.31.1"
	I1008 19:12:52.039418       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:12:52.049874       1 config.go:199] "Starting service config controller"
	I1008 19:12:52.049915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 19:12:52.049938       1 config.go:105] "Starting endpoint slice config controller"
	I1008 19:12:52.049942       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 19:12:52.052179       1 config.go:328] "Starting node config controller"
	I1008 19:12:52.052206       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 19:12:52.150915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 19:12:52.150975       1 shared_informer.go:320] Caches are synced for service config
	I1008 19:12:52.152328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e28f698409b14b2b811361fdfd0a4f5c56068566671b3e9d9a5b7edc69484414] <==
	W1008 19:12:42.651601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 19:12:42.651771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.661163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1008 19:12:42.661241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.688906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 19:12:42.688937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.714485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 19:12:42.714533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.845514       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 19:12:42.845570       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1008 19:12:42.873975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 19:12:42.874042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.880483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 19:12:42.880540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:42.910132       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 19:12:42.910182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.012736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 19:12:43.012788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.022574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 19:12:43.022619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.074929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 19:12:43.074984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1008 19:12:43.133237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 19:12:43.133290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1008 19:12:46.015128       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 19:29:05 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:05.390546    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:29:14 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:14.683941    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415754683627237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:14 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:14.684320    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415754683627237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:16 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:16.389422    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:29:24 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:24.687378    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415764686807256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:24 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:24.687432    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415764686807256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:31 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:31.389362    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:29:34 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:34.690363    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415774689672574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:34 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:34.690809    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415774689672574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:44.391011    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:44.420659    2881 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:44.692840    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415784691891832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:44 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:44.692912    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415784691891832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:54 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:54.694185    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415794693874775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:54 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:54.694214    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415794693874775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:29:58 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:29:58.389625    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:30:04 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:30:04.696222    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415804695677765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:30:04 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:30:04.696546    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415804695677765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:30:11 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:30:11.390610    2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-wvh5g" podUID="99dacec0-80f9-4662-bbea-6191aa9b62d3"
	Oct 08 19:30:14 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:30:14.698690    2881 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415814698312458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:30:14 default-k8s-diff-port-142496 kubelet[2881]: E1008 19:30:14.698733    2881 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415814698312458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1affa0d5c85f358da06804af3f41e2a0a3d86d2269df2457a8af686559809938] <==
	I1008 19:12:51.528840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 19:12:51.566358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 19:12:51.566436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 19:12:51.585541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 19:12:51.585680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142496_03500725-b624-4c94-9168-9eb5a541bcc4!
	I1008 19:12:51.594144       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1108c641-0a60-4a0b-a727-c64300ada9de", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-142496_03500725-b624-4c94-9168-9eb5a541bcc4 became leader
	I1008 19:12:51.686513       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142496_03500725-b624-4c94-9168-9eb5a541bcc4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-wvh5g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 describe pod metrics-server-6867b74b74-wvh5g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-142496 describe pod metrics-server-6867b74b74-wvh5g: exit status 1 (78.279628ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-wvh5g" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-142496 describe pod metrics-server-6867b74b74-wvh5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (491.12s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (347.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966632 -n no-preload-966632
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-08 19:28:00.938177913 +0000 UTC m=+6874.427330096
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-966632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-966632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (11.57µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-966632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
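(Context for the check at start_stop_delete_test.go:297: the test expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, the image substituted via the "--images=MetricsScraper=registry.k8s.io/echoserver:1.4" flag visible in the Audit log below. The describe call above produced no deployment info only because the test's overall context deadline had already expired, hence the immediate "context deadline exceeded (11.57µs)". As an illustrative manual check outside the test harness, assuming the same profile/context name, one could read the container image directly with:

    kubectl --context no-preload-966632 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

This is a sketch of an equivalent inspection, not part of the recorded test run.)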
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-966632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-966632 logs -n 25: (1.185011536s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:27 UTC | 08 Oct 24 19:28 UTC |
	| start   | -p newest-cni-602180 --memory=2200 --alsologtostderr   | newest-cni-602180            | jenkins | v1.34.0 | 08 Oct 24 19:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:28:00
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:28:00.403809  592133 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:28:00.403935  592133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:28:00.403946  592133 out.go:358] Setting ErrFile to fd 2...
	I1008 19:28:00.403953  592133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:28:00.404126  592133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:28:00.404730  592133 out.go:352] Setting JSON to false
	I1008 19:28:00.405884  592133 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11432,"bootTime":1728404248,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:28:00.405987  592133 start.go:139] virtualization: kvm guest
	I1008 19:28:00.408125  592133 out.go:177] * [newest-cni-602180] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:28:00.409428  592133 notify.go:220] Checking for updates...
	I1008 19:28:00.409447  592133 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:28:00.410523  592133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:28:00.411671  592133 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:28:00.412887  592133 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:28:00.413932  592133 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:28:00.415057  592133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:28:00.416629  592133 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:28:00.416777  592133 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:28:00.416914  592133 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:28:00.417051  592133 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:28:00.453844  592133 out.go:177] * Using the kvm2 driver based on user configuration
	I1008 19:28:00.454956  592133 start.go:297] selected driver: kvm2
	I1008 19:28:00.454973  592133 start.go:901] validating driver "kvm2" against <nil>
	I1008 19:28:00.454985  592133 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:28:00.455708  592133 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:28:00.455792  592133 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:28:00.470563  592133 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:28:00.470611  592133 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1008 19:28:00.470672  592133 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1008 19:28:00.470967  592133 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 19:28:00.471022  592133 cni.go:84] Creating CNI manager for ""
	I1008 19:28:00.471073  592133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:28:00.471081  592133 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 19:28:00.471147  592133 start.go:340] cluster config:
	{Name:newest-cni-602180 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-602180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:28:00.471265  592133 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:28:00.472588  592133 out.go:177] * Starting "newest-cni-602180" primary control-plane node in "newest-cni-602180" cluster
	I1008 19:28:00.473517  592133 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:28:00.473547  592133 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1008 19:28:00.473561  592133 cache.go:56] Caching tarball of preloaded images
	I1008 19:28:00.473638  592133 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:28:00.473652  592133 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1008 19:28:00.473773  592133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/config.json ...
	I1008 19:28:00.473801  592133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/newest-cni-602180/config.json: {Name:mk567624c393beb5bd3e2562fb3e4f3254c9bf2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:28:00.473958  592133 start.go:360] acquireMachinesLock for newest-cni-602180: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:28:00.473989  592133 start.go:364] duration metric: took 16.905µs to acquireMachinesLock for "newest-cni-602180"
	I1008 19:28:00.474006  592133 start.go:93] Provisioning new machine with config: &{Name:newest-cni-602180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-602180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:28:00.474072  592133 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.517273864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415681517250781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86a25a6e-5f24-4176-b775-453b700c5140 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.517974898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a906c93d-86be-4728-861a-05c8f54f983f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.518028867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a906c93d-86be-4728-861a-05c8f54f983f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.518211668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a906c93d-86be-4728-861a-05c8f54f983f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.562106182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb3a41d6-4129-4014-932e-9bb1c2d49a4e name=/runtime.v1.RuntimeService/Version
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.562199323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb3a41d6-4129-4014-932e-9bb1c2d49a4e name=/runtime.v1.RuntimeService/Version
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.574473733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04ddb4bf-74bc-4b33-adf7-0b55eb114aa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.575296018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415681575269728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04ddb4bf-74bc-4b33-adf7-0b55eb114aa4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.575760021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=818416f5-473a-4ddc-a38b-b38fd992ded0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.576018477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=818416f5-473a-4ddc-a38b-b38fd992ded0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.576203027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=818416f5-473a-4ddc-a38b-b38fd992ded0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.615605604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1a1b17e-973b-435a-9d0f-be36840c4304 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.616104160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1a1b17e-973b-435a-9d0f-be36840c4304 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.617640832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4e28fac-bc2c-43e9-b388-75b6a2f5380c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.618038000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415681618018575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4e28fac-bc2c-43e9-b388-75b6a2f5380c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.618653919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9ebdd0a-0e38-4e6a-865e-ebc65bbada2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.618702584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9ebdd0a-0e38-4e6a-865e-ebc65bbada2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.618975044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9ebdd0a-0e38-4e6a-865e-ebc65bbada2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.653545721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d00a9a73-098a-4931-b248-7102fdcccb97 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.653626592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d00a9a73-098a-4931-b248-7102fdcccb97 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.655129904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf798be6-186f-46ba-a223-95db6c3b0804 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.655470851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415681655449896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf798be6-186f-46ba-a223-95db6c3b0804 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.655944809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e9e2a05-3584-467d-8f85-1e8e59fdff5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.656000140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e9e2a05-3584-467d-8f85-1e8e59fdff5b name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:28:01 no-preload-966632 crio[709]: time="2024-10-08 19:28:01.656179257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728414555931402068,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b49d58582dbc75d6683e739f2816cad6855c2e9b6fbfdc66e010c5ddc9fb5e3,PodSandboxId:e45a8291cc38a1c00209f4cef4a79821382e63952406d73d86f63064e1081a5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728414534740900473,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b00109c3-21fa-4966-b312-8aabc0302e65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789,PodSandboxId:62108e9cc22e81b69a48f790886a0eb5f9760bc7b603d6726afb6c52062d7045,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728414532750919916,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8qft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 585e6c86-8ece-4a3e-af02-7bb0a97063be,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8,PodSandboxId:33ad6c744ea88260805efa6f471658ccebdcec7e2d81c082f6a951cf44d82d79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728414525073167575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpnvm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3de1b-a732-4c1b-b9
cb-8c6fcd833717,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27,PodSandboxId:812d98aede5920298d3504dab607f05f9360ff55d9706ef28dcb2acb2012bda0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728414525051146574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c664c1f1-4350-423c-bd19-9e64e9efab
2e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af,PodSandboxId:6ba661acb123fbd02acd5e57079a1c9379127cc4b306b4712f1ecd520e4b0180,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728414520433677152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d04f5a02ea15da1cf409aab759adea,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e,PodSandboxId:56baf3c2256ddf179351c4c67bfc634ba1c44ddf2aa502b071e6b34e33cbf8b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728414520388251033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f35c1b50f7fb0d2b5453e3ffba617f00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59,PodSandboxId:b3b9e56a2c0f162511d1978659d3572ae16b3731f2a516944506f9f09c31aa2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728414520345514681,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02557b269ebb9e5ec7d0110b69fdacb5,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005,PodSandboxId:119d64c7893bb52b116c1db018a64fdf14209588364597f8d81d5b6dd5d1513c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728414520306087693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-966632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604150cce265c4bc86302cb2d653d29f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e9e2a05-3584-467d-8f85-1e8e59fdff5b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f17c106378228       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   812d98aede592       storage-provisioner
	0b49d58582dbc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   e45a8291cc38a       busybox
	09475152f3f1b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   62108e9cc22e8       coredns-7c65d6cfc9-r8qft
	f1591b11958e9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   33ad6c744ea88       kube-proxy-qpnvm
	035c2e708170e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   812d98aede592       storage-provisioner
	c8765b4e849e7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   6ba661acb123f       etcd-no-preload-966632
	51e1de45365e8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   56baf3c2256dd       kube-scheduler-no-preload-966632
	d97350daf0186       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   b3b9e56a2c0f1       kube-controller-manager-no-preload-966632
	ebd3d4cf59214       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   119d64c7893bb       kube-apiserver-no-preload-966632
	
	
	==> coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50679 - 29674 "HINFO IN 8047378031698476006.6929136164188044077. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.413715271s
	
	
	==> describe nodes <==
	Name:               no-preload-966632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-966632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=no-preload-966632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T18_59_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:59:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-966632
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 19:27:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 19:24:32 +0000   Tue, 08 Oct 2024 18:59:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 19:24:32 +0000   Tue, 08 Oct 2024 18:59:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 19:24:32 +0000   Tue, 08 Oct 2024 18:59:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 19:24:32 +0000   Tue, 08 Oct 2024 19:08:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.141
	  Hostname:    no-preload-966632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c76d7b5eb04f4388b86b4ad08c01e70a
	  System UUID:                c76d7b5e-b04f-4388-b86b-4ad08c01e70a
	  Boot ID:                    d5cdc3b8-6cce-4afd-835b-744b3f08d692
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-r8qft                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-966632                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-966632             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-966632    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-qpnvm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-966632             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-rlt25              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-966632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-966632 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-966632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-966632 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-966632 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-966632 event: Registered Node no-preload-966632 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-966632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-966632 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-966632 event: Registered Node no-preload-966632 in Controller
	
	
	==> dmesg <==
	[Oct 8 19:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062470] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043210] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.193173] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.466095] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606649] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.562209] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.055041] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069134] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.166523] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.142607] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.250072] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[ +15.370221] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.058162] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.733968] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +4.950518] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.545853] systemd-fstab-generator[1989]: Ignoring "noauto" option for root device
	[  +0.540103] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.287449] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] <==
	{"level":"info","ts":"2024-10-08T19:08:40.916939Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.141:2380"}
	{"level":"info","ts":"2024-10-08T19:08:40.919914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T19:08:40.920140Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.141:2380"}
	{"level":"info","ts":"2024-10-08T19:08:42.470697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-08T19:08:42.470809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-08T19:08:42.470982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 received MsgPreVoteResp from 41850776257dba86 at term 2"}
	{"level":"info","ts":"2024-10-08T19:08:42.471025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 became candidate at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.471050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 received MsgVoteResp from 41850776257dba86 at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.471077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"41850776257dba86 became leader at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.471120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 41850776257dba86 elected leader 41850776257dba86 at term 3"}
	{"level":"info","ts":"2024-10-08T19:08:42.482087Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T19:08:42.482177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T19:08:42.482219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T19:08:42.481920Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"41850776257dba86","local-member-attributes":"{Name:no-preload-966632 ClientURLs:[https://192.168.61.141:2379]}","request-path":"/0/members/41850776257dba86/attributes","cluster-id":"98daa217e16821c9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T19:08:42.482986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T19:08:42.483460Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:08:42.483941Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-08T19:08:42.484767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.141:2379"}
	{"level":"info","ts":"2024-10-08T19:08:42.485132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T19:18:42.516237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":866}
	{"level":"info","ts":"2024-10-08T19:18:42.525755Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":866,"took":"9.17137ms","hash":2850192731,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2617344,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-08T19:18:42.525823Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2850192731,"revision":866,"compact-revision":-1}
	{"level":"info","ts":"2024-10-08T19:23:42.523427Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1109}
	{"level":"info","ts":"2024-10-08T19:23:42.527926Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1109,"took":"3.863447ms","hash":583169935,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-08T19:23:42.528008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":583169935,"revision":1109,"compact-revision":866}
	
	
	==> kernel <==
	 19:28:01 up 19 min,  0 users,  load average: 0.11, 0.14, 0.15
	Linux no-preload-966632 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1008 19:23:44.803466       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:23:44.803517       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1008 19:23:44.804517       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:23:44.804680       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:24:44.805202       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:24:44.805327       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1008 19:24:44.805373       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:24:44.805387       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1008 19:24:44.806463       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:24:44.806519       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1008 19:26:44.806826       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:26:44.807180       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1008 19:26:44.806825       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 19:26:44.807455       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 19:26:44.808532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1008 19:26:44.808563       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] <==
	E1008 19:22:49.422938       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:22:50.054031       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:23:19.427642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:23:20.060992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:23:49.433602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:23:50.068890       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:24:19.440792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:24:20.078586       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:24:32.591147       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-966632"
	E1008 19:24:49.446932       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:24:49.724434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.716µs"
	I1008 19:24:50.085480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1008 19:25:04.723032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="242.958µs"
	E1008 19:25:19.453084       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:25:20.093934       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:25:49.459526       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:25:50.102306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:26:19.465991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:26:20.110039       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:26:49.474685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:26:50.118309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:27:19.481554       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:27:20.125310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1008 19:27:49.487474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 19:27:50.133417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 19:08:45.253655       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 19:08:45.263023       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.141"]
	E1008 19:08:45.263089       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 19:08:45.296803       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1008 19:08:45.296903       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 19:08:45.296922       1 server_linux.go:169] "Using iptables Proxier"
	I1008 19:08:45.299297       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 19:08:45.299509       1 server.go:483] "Version info" version="v1.31.1"
	I1008 19:08:45.299539       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:08:45.301341       1 config.go:199] "Starting service config controller"
	I1008 19:08:45.301374       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 19:08:45.301404       1 config.go:105] "Starting endpoint slice config controller"
	I1008 19:08:45.301424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 19:08:45.303578       1 config.go:328] "Starting node config controller"
	I1008 19:08:45.303686       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 19:08:45.401542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 19:08:45.401653       1 shared_informer.go:320] Caches are synced for service config
	I1008 19:08:45.403819       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] <==
	I1008 19:08:41.711974       1 serving.go:386] Generated self-signed cert in-memory
	W1008 19:08:43.753949       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 19:08:43.754145       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 19:08:43.754232       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 19:08:43.754258       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 19:08:43.838289       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1008 19:08:43.838389       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 19:08:43.847549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 19:08:43.848079       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 19:08:43.849921       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 19:08:43.849980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 19:08:43.951049       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 19:26:49 no-preload-966632 kubelet[1365]: E1008 19:26:49.985899    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415609984638821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:26:59 no-preload-966632 kubelet[1365]: E1008 19:26:59.710115    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:26:59 no-preload-966632 kubelet[1365]: E1008 19:26:59.987956    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415619987477297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:26:59 no-preload-966632 kubelet[1365]: E1008 19:26:59.987982    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415619987477297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:09 no-preload-966632 kubelet[1365]: E1008 19:27:09.990100    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415629989118929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:09 no-preload-966632 kubelet[1365]: E1008 19:27:09.990450    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415629989118929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:13 no-preload-966632 kubelet[1365]: E1008 19:27:13.708770    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:27:19 no-preload-966632 kubelet[1365]: E1008 19:27:19.993287    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415639993057923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:19 no-preload-966632 kubelet[1365]: E1008 19:27:19.994043    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415639993057923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:26 no-preload-966632 kubelet[1365]: E1008 19:27:26.707787    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:27:29 no-preload-966632 kubelet[1365]: E1008 19:27:29.995728    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415649995210339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:29 no-preload-966632 kubelet[1365]: E1008 19:27:29.996046    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415649995210339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]: E1008 19:27:39.709395    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]: E1008 19:27:39.722464    1365 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]: E1008 19:27:39.997043    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415659996748292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:39 no-preload-966632 kubelet[1365]: E1008 19:27:39.997066    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415659996748292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:49 no-preload-966632 kubelet[1365]: E1008 19:27:49.998915    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415669997632805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:49 no-preload-966632 kubelet[1365]: E1008 19:27:49.998938    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415669997632805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:27:53 no-preload-966632 kubelet[1365]: E1008 19:27:53.709267    1365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlt25" podUID="f89db6b4-a0fd-43c3-a2ba-65d8c2de3617"
	Oct 08 19:27:59 no-preload-966632 kubelet[1365]: E1008 19:27:59.999832    1365 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415679999520475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 19:28:00 no-preload-966632 kubelet[1365]: E1008 19:28:00.000172    1365 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415679999520475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] <==
	I1008 19:08:45.149384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 19:09:15.152591       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] <==
	I1008 19:09:16.015381       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 19:09:16.025202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 19:09:16.025316       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 19:09:33.428710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 19:09:33.429221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-966632_da8eff02-06a8-4eff-bbc9-8851223a9e34!
	I1008 19:09:33.429305       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5bd04e67-ee9b-4a46-933b-412c58b00453", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-966632_da8eff02-06a8-4eff-bbc9-8851223a9e34 became leader
	I1008 19:09:33.530199       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-966632_da8eff02-06a8-4eff-bbc9-8851223a9e34!
	

                                                
                                                
-- /stdout --
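Note on the kubelet log above: the recurring "failed to get HasDedicatedImageFs: missing image stats" errors are the eviction manager's CRI ImageFsInfo query against CRI-O. As a minimal sketch (not part of the test output), the same stats can be inspected directly on the node, assuming crictl is installed and CRI-O is listening on its default socket path:

	# Query CRI-O's image filesystem usage via the same CRI ImageFsInfo call
	# the kubelet eviction manager relies on (default CRI-O socket assumed).
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo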
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966632 -n no-preload-966632
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-966632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rlt25
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-966632 describe pod metrics-server-6867b74b74-rlt25
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-966632 describe pod metrics-server-6867b74b74-rlt25: exit status 1 (65.622459ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rlt25" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-966632 describe pod metrics-server-6867b74b74-rlt25: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (347.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (164.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
[the preceding warning repeated 36 more times, verbatim]
E1008 19:25:51.765103  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
[the preceding warning repeated 46 more times, verbatim]
E1008 19:26:38.895837  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
[the preceding warning repeated at least 73 more times, verbatim]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.90:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.90:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (255.744235ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-256554" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-256554 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-256554 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.904µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-256554 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
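
	[editorial note, not harness output] The AddonExistsAfterStop assertion repeatedly lists pods matching the label k8s-app=kubernetes-dashboard and fails once the 9m0s wait budget is exhausted. The Go sketch below is an illustrative client-go poll of the same shape, not the actual minikube test helper; the kubeconfig path mirrors the KUBECONFIG shown in the start log further down, and the 10-second poll interval is an assumption.

	// Illustrative sketch only (not minikube's helpers_test.go): list dashboard pods
	// by label until one is Running or the 9m0s deadline expires.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path, taken from the log below.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19774-529764/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// 9m0s matches the wait budget reported by the failing assertion.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// With the apiserver stopped, this is where "connection refused" surfaces.
				fmt.Println("pod list warning:", err)
			} else {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("dashboard pod running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to start within 9m0s:", ctx.Err()) // the failure recorded above
				return
			case <-time.After(10 * time.Second): // assumed poll interval
			}
		}
	}

	With the apiserver reported as "Stopped" above, every list call returns the same connection-refused error until the context deadline fires, which matches the failure recorded by start_stop_delete_test.go:287.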
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (230.15371ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-256554 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-256554 logs -n 25: (1.575114948s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-038693 sudo                            | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-038693                                 | NoKubernetes-038693          | jenkins | v1.34.0 | 08 Oct 24 18:57 UTC | 08 Oct 24 18:58 UTC |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:58 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-439352                              | cert-expiration-439352       | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 19:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-966632             | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC | 08 Oct 24 18:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 18:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302431                           | kubernetes-upgrade-302431    | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-076496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:00 UTC |
	|         | disable-driver-mounts-076496                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:00 UTC | 08 Oct 24 19:01 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-783146            | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142496  | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC | 08 Oct 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:01 UTC |                     |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-966632                  | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-966632                                   | no-preload-966632            | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC | 08 Oct 24 19:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-256554        | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-783146                 | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142496       | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-783146                                  | embed-certs-783146           | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142496 | jenkins | v1.34.0 | 08 Oct 24 19:03 UTC | 08 Oct 24 19:13 UTC |
	|         | default-k8s-diff-port-142496                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-256554             | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC | 08 Oct 24 19:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-256554                              | old-k8s-version-256554       | jenkins | v1.34.0 | 08 Oct 24 19:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 19:04:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 19:04:21.655537  585386 out.go:345] Setting OutFile to fd 1 ...
	I1008 19:04:21.655668  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655678  585386 out.go:358] Setting ErrFile to fd 2...
	I1008 19:04:21.655683  585386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:04:21.655848  585386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 19:04:21.656345  585386 out.go:352] Setting JSON to false
	I1008 19:04:21.657364  585386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10014,"bootTime":1728404248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 19:04:21.657465  585386 start.go:139] virtualization: kvm guest
	I1008 19:04:21.659338  585386 out.go:177] * [old-k8s-version-256554] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 19:04:21.660519  585386 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 19:04:21.660551  585386 notify.go:220] Checking for updates...
	I1008 19:04:21.662703  585386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 19:04:21.663886  585386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:04:21.665044  585386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 19:04:21.666078  585386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 19:04:21.667173  585386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 19:04:21.668680  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:04:21.669052  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.669121  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.684192  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I1008 19:04:21.684604  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.685121  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.685143  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.685425  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.685598  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.687108  585386 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 19:04:21.688116  585386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 19:04:21.688399  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:04:21.688436  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:04:21.702827  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1008 19:04:21.703332  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:04:21.703801  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:04:21.703845  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:04:21.704216  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:04:21.704408  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:04:21.737212  585386 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 19:04:21.738219  585386 start.go:297] selected driver: kvm2
	I1008 19:04:21.738231  585386 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.738356  585386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 19:04:21.739025  585386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.739108  585386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 19:04:21.752700  585386 install.go:137] /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1008 19:04:21.753045  585386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:04:21.753088  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:04:21.753134  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:04:21.753170  585386 start.go:340] cluster config:
	{Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:04:21.753258  585386 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 19:04:21.754790  585386 out.go:177] * Starting "old-k8s-version-256554" primary control-plane node in "old-k8s-version-256554" cluster
	I1008 19:04:20.270613  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:23.342576  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:21.755891  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:04:21.755921  585386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 19:04:21.755930  585386 cache.go:56] Caching tarball of preloaded images
	I1008 19:04:21.756011  585386 preload.go:172] Found /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 19:04:21.756025  585386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1008 19:04:21.756114  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:04:21.756305  585386 start.go:360] acquireMachinesLock for old-k8s-version-256554: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:04:29.422638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:32.494606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:38.574600  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:41.646592  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:47.726606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:50.798595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:56.878669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:04:59.950607  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:06.030583  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:09.102584  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:15.182571  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:18.254590  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:24.334638  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:27.406606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:33.486619  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:36.558552  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:42.638565  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:45.710610  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:51.790561  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:05:54.862591  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:00.942606  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:04.014669  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:10.094618  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:13.166598  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:19.246573  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:22.318595  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:28.398732  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:31.470685  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:37.550574  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:40.622614  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:46.702620  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:49.774581  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:55.854627  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:06:58.926568  584371 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.141:22: connect: no route to host
	I1008 19:07:01.929445  585014 start.go:364] duration metric: took 3m15.782086174s to acquireMachinesLock for "embed-certs-783146"
	I1008 19:07:01.929517  585014 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:01.929523  585014 fix.go:54] fixHost starting: 
	I1008 19:07:01.929889  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:01.929945  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:01.945409  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I1008 19:07:01.945858  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:01.946357  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:01.946387  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:01.946744  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:01.946895  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:01.947028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:01.948399  585014 fix.go:112] recreateIfNeeded on embed-certs-783146: state=Stopped err=<nil>
	I1008 19:07:01.948419  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	W1008 19:07:01.948545  585014 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:01.954020  585014 out.go:177] * Restarting existing kvm2 VM for "embed-certs-783146" ...
	I1008 19:07:01.926825  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:01.926871  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927219  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:07:01.927270  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:07:01.927475  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:07:01.929278  584371 machine.go:96] duration metric: took 4m37.425232924s to provisionDockerMachine
	I1008 19:07:01.929341  584371 fix.go:56] duration metric: took 4m37.445578307s for fixHost
	I1008 19:07:01.929349  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 4m37.445609603s
	W1008 19:07:01.929369  584371 start.go:714] error starting host: provision: host is not running
	W1008 19:07:01.929510  584371 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1008 19:07:01.929524  584371 start.go:729] Will try again in 5 seconds ...
	I1008 19:07:01.955309  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Start
	I1008 19:07:01.955452  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring networks are active...
	I1008 19:07:01.956122  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network default is active
	I1008 19:07:01.956432  585014 main.go:141] libmachine: (embed-certs-783146) Ensuring network mk-embed-certs-783146 is active
	I1008 19:07:01.956743  585014 main.go:141] libmachine: (embed-certs-783146) Getting domain xml...
	I1008 19:07:01.957427  585014 main.go:141] libmachine: (embed-certs-783146) Creating domain...
	I1008 19:07:03.159229  585014 main.go:141] libmachine: (embed-certs-783146) Waiting to get IP...
	I1008 19:07:03.160116  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.160503  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.160565  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.160497  585935 retry.go:31] will retry after 282.873854ms: waiting for machine to come up
	I1008 19:07:03.445297  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.445810  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.445838  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.445740  585935 retry.go:31] will retry after 344.936527ms: waiting for machine to come up
	I1008 19:07:03.792413  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:03.792802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:03.792837  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:03.792741  585935 retry.go:31] will retry after 414.968289ms: waiting for machine to come up
	I1008 19:07:04.209200  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.209532  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.209555  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.209502  585935 retry.go:31] will retry after 403.180416ms: waiting for machine to come up
	I1008 19:07:04.614156  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:04.614679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:04.614713  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:04.614636  585935 retry.go:31] will retry after 631.841511ms: waiting for machine to come up
	I1008 19:07:05.248574  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.248983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.249015  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.248917  585935 retry.go:31] will retry after 639.776909ms: waiting for machine to come up
	I1008 19:07:05.890868  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:05.891332  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:05.891406  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:05.891329  585935 retry.go:31] will retry after 764.489176ms: waiting for machine to come up
	I1008 19:07:06.931497  584371 start.go:360] acquireMachinesLock for no-preload-966632: {Name:mk7c2f1a7556cb2456fe0a352698afb29cfcf496 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 19:07:06.657130  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:06.657520  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:06.657550  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:06.657462  585935 retry.go:31] will retry after 1.348973281s: waiting for machine to come up
	I1008 19:07:08.008293  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:08.008779  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:08.008805  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:08.008740  585935 retry.go:31] will retry after 1.146283289s: waiting for machine to come up
	I1008 19:07:09.157106  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:09.157517  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:09.157546  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:09.157493  585935 retry.go:31] will retry after 1.510430686s: waiting for machine to come up
	I1008 19:07:10.669393  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:10.669802  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:10.669831  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:10.669749  585935 retry.go:31] will retry after 2.380864418s: waiting for machine to come up
	I1008 19:07:13.053078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:13.053487  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:13.053512  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:13.053427  585935 retry.go:31] will retry after 2.553865951s: waiting for machine to come up
	I1008 19:07:15.610098  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:15.610501  585014 main.go:141] libmachine: (embed-certs-783146) DBG | unable to find current IP address of domain embed-certs-783146 in network mk-embed-certs-783146
	I1008 19:07:15.610535  585014 main.go:141] libmachine: (embed-certs-783146) DBG | I1008 19:07:15.610428  585935 retry.go:31] will retry after 4.018444789s: waiting for machine to come up
	I1008 19:07:20.967039  585096 start.go:364] duration metric: took 3m30.476693248s to acquireMachinesLock for "default-k8s-diff-port-142496"
	I1008 19:07:20.967105  585096 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:20.967115  585096 fix.go:54] fixHost starting: 
	I1008 19:07:20.967619  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:20.967675  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:20.984936  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1008 19:07:20.985358  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:20.985869  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:07:20.985896  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:20.986199  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:20.986380  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:20.986520  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:07:20.987828  585096 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142496: state=Stopped err=<nil>
	I1008 19:07:20.987867  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	W1008 19:07:20.988020  585096 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:20.990029  585096 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142496" ...
	I1008 19:07:19.632076  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632468  585014 main.go:141] libmachine: (embed-certs-783146) Found IP for machine: 192.168.72.183
	I1008 19:07:19.632504  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has current primary IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.632511  585014 main.go:141] libmachine: (embed-certs-783146) Reserving static IP address...
	I1008 19:07:19.632968  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.633020  585014 main.go:141] libmachine: (embed-certs-783146) DBG | skip adding static IP to network mk-embed-certs-783146 - found existing host DHCP lease matching {name: "embed-certs-783146", mac: "52:54:00:d8:06:4d", ip: "192.168.72.183"}
	I1008 19:07:19.633041  585014 main.go:141] libmachine: (embed-certs-783146) Reserved static IP address: 192.168.72.183
	I1008 19:07:19.633062  585014 main.go:141] libmachine: (embed-certs-783146) Waiting for SSH to be available...
	I1008 19:07:19.633073  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Getting to WaitForSSH function...
	I1008 19:07:19.634939  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635221  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.635249  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.635415  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH client type: external
	I1008 19:07:19.635453  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa (-rw-------)
	I1008 19:07:19.635496  585014 main.go:141] libmachine: (embed-certs-783146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:19.635509  585014 main.go:141] libmachine: (embed-certs-783146) DBG | About to run SSH command:
	I1008 19:07:19.635522  585014 main.go:141] libmachine: (embed-certs-783146) DBG | exit 0
	I1008 19:07:19.758276  585014 main.go:141] libmachine: (embed-certs-783146) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:19.758658  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetConfigRaw
	I1008 19:07:19.759310  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:19.761990  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.762456  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.762803  585014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/config.json ...
	I1008 19:07:19.763012  585014 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:19.763034  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:19.763271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.765523  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765829  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.765858  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.765988  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.766159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766289  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.766433  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.766589  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.766877  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.766891  585014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:19.866272  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:19.866297  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866563  585014 buildroot.go:166] provisioning hostname "embed-certs-783146"
	I1008 19:07:19.866585  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:19.866799  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.869295  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869648  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.869679  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.869836  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.870017  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870153  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.870293  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.870444  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.870621  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.870636  585014 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-783146 && echo "embed-certs-783146" | sudo tee /etc/hostname
	I1008 19:07:19.983892  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-783146
	
	I1008 19:07:19.983925  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:19.986430  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986776  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:19.986806  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:19.986922  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:19.987104  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987271  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:19.987417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:19.987588  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:19.987746  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:19.987762  585014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-783146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-783146/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-783146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:20.095178  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:20.095212  585014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:20.095264  585014 buildroot.go:174] setting up certificates
	I1008 19:07:20.095276  585014 provision.go:84] configureAuth start
	I1008 19:07:20.095288  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetMachineName
	I1008 19:07:20.095578  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.098000  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098431  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.098459  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.098591  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.100935  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101241  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.101271  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.101393  585014 provision.go:143] copyHostCerts
	I1008 19:07:20.101452  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:20.101463  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:20.101544  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:20.101807  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:20.101824  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:20.101873  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:20.102015  585014 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:20.102029  585014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:20.102075  585014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:20.102152  585014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.embed-certs-783146 san=[127.0.0.1 192.168.72.183 embed-certs-783146 localhost minikube]
	I1008 19:07:20.378020  585014 provision.go:177] copyRemoteCerts
	I1008 19:07:20.378093  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:20.378133  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.380678  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381017  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.381050  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.381175  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.381386  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.381579  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.381717  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.464627  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:20.487853  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:07:20.510174  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:07:20.532381  585014 provision.go:87] duration metric: took 437.094502ms to configureAuth
	I1008 19:07:20.532405  585014 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:20.532571  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:20.532669  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.535064  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.535382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.535559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.535753  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.535920  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.536039  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.536193  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.536406  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.536429  585014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:20.745937  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:20.745967  585014 machine.go:96] duration metric: took 982.940955ms to provisionDockerMachine
	I1008 19:07:20.745980  585014 start.go:293] postStartSetup for "embed-certs-783146" (driver="kvm2")
	I1008 19:07:20.745994  585014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:20.746012  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.746380  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:20.746417  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.749056  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749395  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.749425  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.749566  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.749738  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.749852  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.749943  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.828580  585014 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:20.832894  585014 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:20.832923  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:20.832994  585014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:20.833069  585014 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:20.833162  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:20.842230  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:20.864957  585014 start.go:296] duration metric: took 118.964089ms for postStartSetup
	I1008 19:07:20.865006  585014 fix.go:56] duration metric: took 18.93548189s for fixHost
	I1008 19:07:20.865029  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.867709  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868089  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.868113  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.868223  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.868425  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868583  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.868742  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.868926  585014 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:20.869159  585014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.183 22 <nil> <nil>}
	I1008 19:07:20.869175  585014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:20.966898  585014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414440.940275348
	
	I1008 19:07:20.966919  585014 fix.go:216] guest clock: 1728414440.940275348
	I1008 19:07:20.966926  585014 fix.go:229] Guest: 2024-10-08 19:07:20.940275348 +0000 UTC Remote: 2024-10-08 19:07:20.865011917 +0000 UTC m=+214.857488447 (delta=75.263431ms)
	I1008 19:07:20.966948  585014 fix.go:200] guest clock delta is within tolerance: 75.263431ms
	I1008 19:07:20.966953  585014 start.go:83] releasing machines lock for "embed-certs-783146", held for 19.037463535s
	I1008 19:07:20.966979  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.967246  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:20.969983  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970357  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.970386  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.970586  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971061  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971243  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:20.971340  585014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:20.971382  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.971487  585014 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:20.971515  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:20.974211  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974581  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974632  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.974695  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.974872  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.974999  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:20.975024  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:20.975028  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975184  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975228  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:20.975374  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:20.975501  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:20.975559  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:20.975709  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:21.072152  585014 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:21.078116  585014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:21.221176  585014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:21.227359  585014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:21.227434  585014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:21.242691  585014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:21.242716  585014 start.go:495] detecting cgroup driver to use...
	I1008 19:07:21.242796  585014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:21.257429  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:21.270208  585014 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:21.270258  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:21.282826  585014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:21.295827  585014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:21.405804  585014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:21.572147  585014 docker.go:233] disabling docker service ...
	I1008 19:07:21.572231  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:21.586083  585014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:21.598657  585014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:21.722224  585014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:21.853317  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:21.867234  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:21.884872  585014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:21.884949  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.895154  585014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:21.895223  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.905371  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.915602  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.926026  585014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:21.938089  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.949261  585014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.966211  585014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:21.978120  585014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:21.987631  585014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:21.987693  585014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:22.002185  585014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:22.013111  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:22.135933  585014 ssh_runner.go:195] Run: sudo systemctl restart crio
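(For reference: the commands above prepare CRI-O before it is restarted. A condensed shell sketch of the same steps follows; it is illustrative rather than a verbatim replay of this run, and it compresses the individual sed edits shown in the log.)

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and cgroup driver in CRI-O's drop-in config.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # Make bridged traffic visible to iptables and enable IPv4 forwarding.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

    # Reload unit files and restart the runtime.
    sudo systemctl daemon-reload && sudo systemctl restart crio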
	I1008 19:07:22.230256  585014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:22.230342  585014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:22.235005  585014 start.go:563] Will wait 60s for crictl version
	I1008 19:07:22.235076  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:07:22.238991  585014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:22.279302  585014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:22.279391  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.308343  585014 ssh_runner.go:195] Run: crio --version
	I1008 19:07:22.337272  585014 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:20.991759  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Start
	I1008 19:07:20.991997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring networks are active...
	I1008 19:07:20.992703  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network default is active
	I1008 19:07:20.993057  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Ensuring network mk-default-k8s-diff-port-142496 is active
	I1008 19:07:20.993435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Getting domain xml...
	I1008 19:07:20.994209  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Creating domain...
	I1008 19:07:22.240185  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting to get IP...
	I1008 19:07:22.240949  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241417  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.241469  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.241382  586083 retry.go:31] will retry after 234.248435ms: waiting for machine to come up
	I1008 19:07:22.476800  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477343  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.477375  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.477275  586083 retry.go:31] will retry after 323.851452ms: waiting for machine to come up
	I1008 19:07:22.802997  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803574  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:22.803610  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:22.803516  586083 retry.go:31] will retry after 445.299956ms: waiting for machine to come up
	I1008 19:07:23.250211  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250686  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.250715  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.250651  586083 retry.go:31] will retry after 574.786836ms: waiting for machine to come up
	I1008 19:07:23.827535  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828010  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:23.828039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:23.827959  586083 retry.go:31] will retry after 563.165045ms: waiting for machine to come up
	I1008 19:07:24.393150  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393741  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.393792  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.393717  586083 retry.go:31] will retry after 576.443855ms: waiting for machine to come up
	I1008 19:07:24.971698  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972132  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:24.972161  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:24.972090  586083 retry.go:31] will retry after 999.17904ms: waiting for machine to come up
	I1008 19:07:22.338812  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetIP
	I1008 19:07:22.341998  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342382  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:22.342417  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:22.342680  585014 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:22.346863  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:22.359456  585014 kubeadm.go:883] updating cluster {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:22.359630  585014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:22.359692  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:22.394832  585014 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:22.394893  585014 ssh_runner.go:195] Run: which lz4
	I1008 19:07:22.398935  585014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:22.403100  585014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:22.403127  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:23.771685  585014 crio.go:462] duration metric: took 1.372780034s to copy over tarball
	I1008 19:07:23.771769  585014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:25.816508  585014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.044704362s)
	I1008 19:07:25.816547  585014 crio.go:469] duration metric: took 2.04482777s to extract the tarball
	I1008 19:07:25.816557  585014 ssh_runner.go:146] rm: /preloaded.tar.lz4
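(The two long-running steps above stream a cached image tarball to the guest and unpack it into /var; the extraction side boils down to the following, with the flag roles noted.)

    # On the guest: unpack a preloaded image tarball into /var, then remove it.
    # --xattrs keeps extended attributes such as file capabilities; -I lz4 decompresses on the fly.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4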
	I1008 19:07:25.852980  585014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:25.893366  585014 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:25.893391  585014 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:25.893399  585014 kubeadm.go:934] updating node { 192.168.72.183 8443 v1.31.1 crio true true} ...
	I1008 19:07:25.893517  585014 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-783146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:25.893579  585014 ssh_runner.go:195] Run: crio config
	I1008 19:07:25.934828  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:25.934850  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:25.934874  585014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:25.934906  585014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.183 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-783146 NodeName:embed-certs-783146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:25.935039  585014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-783146"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
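
(A generated configuration like the one above is consumed by kubeadm as a single file. This particular run takes the cluster-restart path, but on a fresh control plane the hand-off would look roughly like the following; the invocation is illustrative, not taken from this run.)

    # Illustrative only: hand the generated config to kubeadm on a fresh control plane.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml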
	
	I1008 19:07:25.935106  585014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:25.944851  585014 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:25.944919  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:25.954022  585014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1008 19:07:25.979675  585014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:26.001147  585014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1008 19:07:26.017613  585014 ssh_runner.go:195] Run: grep 192.168.72.183	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:26.021401  585014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
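(The one-liner above updates a single /etc/hosts record without duplicating it: drop any existing line for the name, append the fresh record, and copy the result back over /etc/hosts. The same pattern in isolation, with a placeholder name and address:)

    # Replace-or-add one hosts record idempotently (placeholder values).
    { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.5	example.internal"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$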
	I1008 19:07:26.033347  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:25.972405  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972868  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:25.972891  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:25.972831  586083 retry.go:31] will retry after 1.186801161s: waiting for machine to come up
	I1008 19:07:27.161319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161877  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:27.161900  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:27.161823  586083 retry.go:31] will retry after 1.448383195s: waiting for machine to come up
	I1008 19:07:28.611319  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:28.611697  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:28.611613  586083 retry.go:31] will retry after 1.738948191s: waiting for machine to come up
	I1008 19:07:30.352081  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352582  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:30.352617  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:30.352530  586083 retry.go:31] will retry after 2.624799898s: waiting for machine to come up
	I1008 19:07:26.138298  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:26.154419  585014 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146 for IP: 192.168.72.183
	I1008 19:07:26.154447  585014 certs.go:194] generating shared ca certs ...
	I1008 19:07:26.154470  585014 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:26.154651  585014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:26.154714  585014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:26.154729  585014 certs.go:256] generating profile certs ...
	I1008 19:07:26.154860  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/client.key
	I1008 19:07:26.154948  585014 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key.b07aac04
	I1008 19:07:26.155003  585014 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key
	I1008 19:07:26.155159  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:26.155202  585014 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:26.155212  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:26.155232  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:26.155256  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:26.155280  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:26.155319  585014 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:26.156076  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:26.187225  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:26.235804  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:26.268034  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:26.292729  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 19:07:26.320118  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:26.351058  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:26.374004  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/embed-certs-783146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:07:26.396526  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:26.419067  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:26.441449  585014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:26.463768  585014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:26.479471  585014 ssh_runner.go:195] Run: openssl version
	I1008 19:07:26.484957  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:26.495286  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501166  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.501225  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:26.507154  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:26.517587  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:26.528157  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532896  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.532967  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:26.540724  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:26.554952  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:26.567160  585014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571304  585014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.571394  585014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:26.576974  585014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:26.587198  585014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:26.591621  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:26.597176  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:26.602766  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:26.608373  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:26.613797  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:26.619310  585014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:26.624702  585014 kubeadm.go:392] StartCluster: {Name:embed-certs-783146 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-783146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:26.624831  585014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:26.624878  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.666183  585014 cri.go:89] found id: ""
	I1008 19:07:26.666253  585014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:26.676621  585014 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:26.676644  585014 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:26.676699  585014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:26.686549  585014 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:26.687532  585014 kubeconfig.go:125] found "embed-certs-783146" server: "https://192.168.72.183:8443"
	I1008 19:07:26.689545  585014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:26.698758  585014 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.183
	I1008 19:07:26.698790  585014 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:26.698804  585014 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:26.698856  585014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:26.738148  585014 cri.go:89] found id: ""
	I1008 19:07:26.738209  585014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:26.753980  585014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:26.763186  585014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:26.763208  585014 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:26.763257  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:07:26.771789  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:26.771847  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:26.780812  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:07:26.789329  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:26.789390  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:26.798230  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.806781  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:26.806842  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:26.815549  585014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:07:26.823782  585014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:26.823830  585014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:26.832698  585014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:26.841687  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:26.945569  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.159232  585014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213619978s)
	I1008 19:07:28.159280  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.372727  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.456082  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:28.567486  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:28.567627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.067909  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:29.568466  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.068627  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.567821  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:30.604366  585014 api_server.go:72] duration metric: took 2.036885191s to wait for apiserver process to appear ...
	I1008 19:07:30.604403  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:30.604440  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.461223  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.461270  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.461286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.499425  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:33.499473  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:33.604563  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:33.614594  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:33.614625  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.105286  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.111706  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:34.111747  585014 api_server.go:103] status: https://192.168.72.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:34.605326  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:07:34.612912  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:07:34.619204  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:34.619227  585014 api_server.go:131] duration metric: took 4.014816798s to wait for apiserver health ...
	I1008 19:07:34.619236  585014 cni.go:84] Creating CNI manager for ""
	I1008 19:07:34.619242  585014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:34.621043  585014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:32.980593  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981141  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:32.981171  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:32.981076  586083 retry.go:31] will retry after 3.401015855s: waiting for machine to come up
	I1008 19:07:34.622500  585014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:34.632627  585014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:34.654975  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:34.667824  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:34.667853  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:34.667863  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:34.667874  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:34.667879  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:34.667884  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:34.667890  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:34.667899  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:34.667904  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:34.667910  585014 system_pods.go:74] duration metric: took 12.913884ms to wait for pod list to return data ...
	I1008 19:07:34.667919  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:34.672996  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:34.673018  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:34.673029  585014 node_conditions.go:105] duration metric: took 5.105827ms to run NodePressure ...
	I1008 19:07:34.673045  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:34.992309  585014 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996835  585014 kubeadm.go:739] kubelet initialised
	I1008 19:07:34.996861  585014 kubeadm.go:740] duration metric: took 4.524726ms waiting for restarted kubelet to initialise ...
	I1008 19:07:34.996870  585014 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:35.005255  585014 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.012539  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012568  585014 pod_ready.go:82] duration metric: took 7.278613ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.012580  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.012589  585014 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.018465  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018489  585014 pod_ready.go:82] duration metric: took 5.8848ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.018500  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "etcd-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.018509  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.026503  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026533  585014 pod_ready.go:82] duration metric: took 8.012156ms for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.026544  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.026555  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.058419  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058449  585014 pod_ready.go:82] duration metric: took 31.879605ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.058463  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.058471  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.458244  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458275  585014 pod_ready.go:82] duration metric: took 399.794285ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.458286  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-proxy-9l7t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.458292  585014 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:35.858567  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858612  585014 pod_ready.go:82] duration metric: took 400.312425ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:35.858625  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:35.858637  585014 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:36.258490  585014 pod_ready.go:98] node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258520  585014 pod_ready.go:82] duration metric: took 399.870797ms for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:07:36.258530  585014 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-783146" hosting pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:36.258538  585014 pod_ready.go:39] duration metric: took 1.261659261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:36.258558  585014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:07:36.269993  585014 ops.go:34] apiserver oom_adj: -16
	I1008 19:07:36.270016  585014 kubeadm.go:597] duration metric: took 9.593365367s to restartPrimaryControlPlane
	I1008 19:07:36.270025  585014 kubeadm.go:394] duration metric: took 9.645330227s to StartCluster
	I1008 19:07:36.270044  585014 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.270125  585014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:07:36.271682  585014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:36.271945  585014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.183 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:07:36.272024  585014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:07:36.272130  585014 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-783146"
	I1008 19:07:36.272158  585014 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-783146"
	W1008 19:07:36.272166  585014 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:07:36.272152  585014 addons.go:69] Setting default-storageclass=true in profile "embed-certs-783146"
	I1008 19:07:36.272179  585014 addons.go:69] Setting metrics-server=true in profile "embed-certs-783146"
	I1008 19:07:36.272198  585014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-783146"
	I1008 19:07:36.272203  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272213  585014 addons.go:234] Setting addon metrics-server=true in "embed-certs-783146"
	W1008 19:07:36.272224  585014 addons.go:243] addon metrics-server should already be in state true
	I1008 19:07:36.272256  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.272187  585014 config.go:182] Loaded profile config "embed-certs-783146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:36.272616  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272638  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272658  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272689  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.272694  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.272738  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.274263  585014 out.go:177] * Verifying Kubernetes components...
	I1008 19:07:36.275444  585014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:36.288219  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1008 19:07:36.288686  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.289297  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.289328  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.289721  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.290415  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.290462  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.293043  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I1008 19:07:36.293374  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I1008 19:07:36.293461  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293721  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.293954  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.293978  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294188  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.294212  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.294299  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294504  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.294534  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.294982  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.295028  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.297638  585014 addons.go:234] Setting addon default-storageclass=true in "embed-certs-783146"
	W1008 19:07:36.297661  585014 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:07:36.297692  585014 host.go:66] Checking if "embed-certs-783146" exists ...
	I1008 19:07:36.298042  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.298081  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.309286  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I1008 19:07:36.309776  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310024  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I1008 19:07:36.310337  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310360  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.310478  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.310771  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.310980  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.310997  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.311013  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.311330  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.311500  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.313004  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1008 19:07:36.313159  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313368  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.313523  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.313926  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.313951  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.314284  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.314777  585014 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:36.314820  585014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:36.314992  585014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:07:36.315010  585014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:07:36.316168  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:07:36.316191  585014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:07:36.316212  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.316309  585014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.316333  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:07:36.316352  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.320088  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320418  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320566  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320591  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320733  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.320888  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.320912  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.320931  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321074  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.321181  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321235  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.321400  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.321397  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.321532  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.331532  585014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1008 19:07:36.331881  585014 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:36.332309  585014 main.go:141] libmachine: Using API Version  1
	I1008 19:07:36.332331  585014 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:36.332724  585014 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:36.332929  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetState
	I1008 19:07:36.334589  585014 main.go:141] libmachine: (embed-certs-783146) Calling .DriverName
	I1008 19:07:36.334775  585014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.334797  585014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:07:36.334811  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHHostname
	I1008 19:07:36.337675  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338078  585014 main.go:141] libmachine: (embed-certs-783146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:06:4d", ip: ""} in network mk-embed-certs-783146: {Iface:virbr1 ExpiryTime:2024-10-08 20:07:12 +0000 UTC Type:0 Mac:52:54:00:d8:06:4d Iaid: IPaddr:192.168.72.183 Prefix:24 Hostname:embed-certs-783146 Clientid:01:52:54:00:d8:06:4d}
	I1008 19:07:36.338093  585014 main.go:141] libmachine: (embed-certs-783146) DBG | domain embed-certs-783146 has defined IP address 192.168.72.183 and MAC address 52:54:00:d8:06:4d in network mk-embed-certs-783146
	I1008 19:07:36.338209  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHPort
	I1008 19:07:36.338380  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHKeyPath
	I1008 19:07:36.338491  585014 main.go:141] libmachine: (embed-certs-783146) Calling .GetSSHUsername
	I1008 19:07:36.338600  585014 sshutil.go:53] new ssh client: &{IP:192.168.72.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/embed-certs-783146/id_rsa Username:docker}
	I1008 19:07:36.444532  585014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:36.462719  585014 node_ready.go:35] waiting up to 6m0s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:36.519485  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:07:36.613714  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:07:36.613738  585014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:07:36.637773  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:07:36.645883  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:07:36.645907  585014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:07:36.685924  585014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.685952  585014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:07:36.710461  585014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:07:36.970231  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970256  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970563  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970589  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970599  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.970606  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.970860  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.970881  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:36.970892  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980520  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:36.980538  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:36.980826  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:36.980869  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:36.980888  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.676577  585014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038767196s)
	I1008 19:07:37.676633  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.676646  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.676972  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.676982  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677040  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677058  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.677075  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.677333  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.677351  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.677375  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689600  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689615  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.689883  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.689897  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.689901  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.689917  585014 main.go:141] libmachine: Making call to close driver server
	I1008 19:07:37.689934  585014 main.go:141] libmachine: (embed-certs-783146) Calling .Close
	I1008 19:07:37.690210  585014 main.go:141] libmachine: (embed-certs-783146) DBG | Closing plugin on server side
	I1008 19:07:37.690227  585014 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:07:37.690240  585014 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:07:37.690256  585014 addons.go:475] Verifying addon metrics-server=true in "embed-certs-783146"
	I1008 19:07:37.692035  585014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1008 19:07:36.383659  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.383993  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | unable to find current IP address of domain default-k8s-diff-port-142496 in network mk-default-k8s-diff-port-142496
	I1008 19:07:36.384026  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | I1008 19:07:36.383939  586083 retry.go:31] will retry after 3.325274435s: waiting for machine to come up
	I1008 19:07:39.713420  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.713902  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Found IP for machine: 192.168.50.213
	I1008 19:07:39.713926  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserving static IP address...
	I1008 19:07:39.713945  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has current primary IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.714332  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.714362  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Reserved static IP address: 192.168.50.213
	I1008 19:07:39.714382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | skip adding static IP to network mk-default-k8s-diff-port-142496 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142496", mac: "52:54:00:14:28:c1", ip: "192.168.50.213"}
	I1008 19:07:39.714401  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Getting to WaitForSSH function...
	I1008 19:07:39.714415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Waiting for SSH to be available...
	I1008 19:07:39.716542  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.716905  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.716951  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.717025  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH client type: external
	I1008 19:07:39.717052  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa (-rw-------)
	I1008 19:07:39.717111  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:07:39.717147  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | About to run SSH command:
	I1008 19:07:39.717165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | exit 0
	I1008 19:07:39.842089  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | SSH cmd err, output: <nil>: 
	I1008 19:07:39.842499  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetConfigRaw
	I1008 19:07:39.843125  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:39.845604  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.845976  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.846008  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.846276  585096 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/config.json ...
	I1008 19:07:39.846509  585096 machine.go:93] provisionDockerMachine start ...
	I1008 19:07:39.846541  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:39.846768  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.849107  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849411  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.849435  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.849743  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.849924  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850084  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.850236  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.850422  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.850679  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.850695  585096 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:07:39.950481  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:07:39.950507  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.950796  585096 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142496"
	I1008 19:07:39.950825  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:39.951016  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:39.953300  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:39.953678  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:39.953833  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:39.954002  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954168  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:39.954297  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:39.954450  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:39.954621  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:39.954636  585096 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142496 && echo "default-k8s-diff-port-142496" | sudo tee /etc/hostname
	I1008 19:07:40.068848  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142496
	
	I1008 19:07:40.068876  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.071855  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072195  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.072226  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.072392  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.072563  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072746  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.072871  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.073039  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.073237  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.073257  585096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:07:40.183039  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:07:40.183073  585096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:07:40.183116  585096 buildroot.go:174] setting up certificates
	I1008 19:07:40.183131  585096 provision.go:84] configureAuth start
	I1008 19:07:40.183146  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetMachineName
	I1008 19:07:40.183451  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:40.185904  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186264  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.186284  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.186453  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.188672  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.189037  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.189134  585096 provision.go:143] copyHostCerts
	I1008 19:07:40.189204  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:07:40.189217  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:07:40.189281  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:07:40.189427  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:07:40.189441  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:07:40.189474  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:07:40.189563  585096 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:07:40.189573  585096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:07:40.189600  585096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:07:40.189679  585096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142496 san=[127.0.0.1 192.168.50.213 default-k8s-diff-port-142496 localhost minikube]
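	As an illustrative aside (not part of the captured log): the server certificate generated above embeds the SAN list shown; assuming standard openssl on the CI host, the names baked into such a certificate could be confirmed like this.
	# hedged, illustrative check; not a command executed by this test
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/../server.pem \
	  | grep -A1 "Subject Alternative Name"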
	I1008 19:07:41.022737  585386 start.go:364] duration metric: took 3m19.266396441s to acquireMachinesLock for "old-k8s-version-256554"
	I1008 19:07:41.022813  585386 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:07:41.022825  585386 fix.go:54] fixHost starting: 
	I1008 19:07:41.023256  585386 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:07:41.023314  585386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:07:41.043293  585386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1008 19:07:41.043909  585386 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:07:41.044404  585386 main.go:141] libmachine: Using API Version  1
	I1008 19:07:41.044434  585386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:07:41.044781  585386 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:07:41.044975  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:07:41.045145  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetState
	I1008 19:07:41.046596  585386 fix.go:112] recreateIfNeeded on old-k8s-version-256554: state=Stopped err=<nil>
	I1008 19:07:41.046624  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	W1008 19:07:41.046776  585386 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:07:37.693230  585014 addons.go:510] duration metric: took 1.421218857s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1008 19:07:38.466754  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:40.967492  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:41.048525  585386 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-256554" ...
	I1008 19:07:41.049635  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .Start
	I1008 19:07:41.049774  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring networks are active...
	I1008 19:07:41.050594  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network default is active
	I1008 19:07:41.051045  585386 main.go:141] libmachine: (old-k8s-version-256554) Ensuring network mk-old-k8s-version-256554 is active
	I1008 19:07:41.051577  585386 main.go:141] libmachine: (old-k8s-version-256554) Getting domain xml...
	I1008 19:07:41.052331  585386 main.go:141] libmachine: (old-k8s-version-256554) Creating domain...
	I1008 19:07:40.418969  585096 provision.go:177] copyRemoteCerts
	I1008 19:07:40.419032  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:07:40.419060  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.421382  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421701  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.421730  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.421912  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.422108  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.422287  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.422426  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.500533  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:07:40.524199  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 19:07:40.547495  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:07:40.570656  585096 provision.go:87] duration metric: took 387.509086ms to configureAuth
	I1008 19:07:40.570687  585096 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:07:40.570859  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:07:40.570934  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.573578  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.573941  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.573970  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.574088  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.574290  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574534  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.574680  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.574881  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.575056  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.575074  585096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:07:40.795575  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:07:40.795604  585096 machine.go:96] duration metric: took 949.073836ms to provisionDockerMachine
	I1008 19:07:40.795618  585096 start.go:293] postStartSetup for "default-k8s-diff-port-142496" (driver="kvm2")
	I1008 19:07:40.795629  585096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:07:40.795646  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:40.796003  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:07:40.796042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.798307  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798635  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.798666  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.798881  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.799039  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.799249  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.799369  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:40.880470  585096 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:07:40.884632  585096 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:07:40.884660  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:07:40.884719  585096 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:07:40.884834  585096 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:07:40.884947  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:07:40.893828  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:40.917278  585096 start.go:296] duration metric: took 121.644332ms for postStartSetup
	I1008 19:07:40.917320  585096 fix.go:56] duration metric: took 19.950206082s for fixHost
	I1008 19:07:40.917342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:40.919971  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920315  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:40.920342  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:40.920539  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:40.920782  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.920969  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:40.921114  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:40.921292  585096 main.go:141] libmachine: Using SSH client type: native
	I1008 19:07:40.921519  585096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I1008 19:07:40.921535  585096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:07:41.022573  585096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414460.977520721
	
	I1008 19:07:41.022596  585096 fix.go:216] guest clock: 1728414460.977520721
	I1008 19:07:41.022603  585096 fix.go:229] Guest: 2024-10-08 19:07:40.977520721 +0000 UTC Remote: 2024-10-08 19:07:40.917324605 +0000 UTC m=+230.557951471 (delta=60.196116ms)
	I1008 19:07:41.022627  585096 fix.go:200] guest clock delta is within tolerance: 60.196116ms
	I1008 19:07:41.022634  585096 start.go:83] releasing machines lock for "default-k8s-diff-port-142496", held for 20.055558507s
	I1008 19:07:41.022665  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.022896  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:41.025861  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026272  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.026301  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.026479  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027126  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:07:41.027537  585096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:07:41.027581  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.027725  585096 ssh_runner.go:195] Run: cat /version.json
	I1008 19:07:41.027749  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:07:41.030474  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.030745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031094  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031123  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031148  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:41.031165  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:41.031322  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031430  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:07:41.031511  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031572  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:07:41.031670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031745  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:07:41.031827  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.031883  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:07:41.135368  585096 ssh_runner.go:195] Run: systemctl --version
	I1008 19:07:41.141492  585096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:07:41.288617  585096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:07:41.295482  585096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:07:41.295550  585096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:07:41.310709  585096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:07:41.310738  585096 start.go:495] detecting cgroup driver to use...
	I1008 19:07:41.310821  585096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:07:41.328574  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:07:41.342506  585096 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:07:41.342564  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:07:41.356308  585096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:07:41.372510  585096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:07:41.497084  585096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:07:41.665187  585096 docker.go:233] disabling docker service ...
	I1008 19:07:41.665272  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:07:41.682309  585096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:07:41.702567  585096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:07:41.882727  585096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:07:42.006479  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:07:42.020474  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:07:42.039750  585096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:07:42.039834  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.050395  585096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:07:42.050449  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.060572  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.071974  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.083208  585096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:07:42.097166  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.110090  585096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.128424  585096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:07:42.139296  585096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:07:42.148278  585096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:07:42.148320  585096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:07:42.164007  585096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:07:42.173218  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:42.303890  585096 ssh_runner.go:195] Run: sudo systemctl restart crio
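	As an illustrative aside (not part of the captured log): the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted; a minimal sketch of how the resulting settings could be verified from a shell on the guest, assuming the commands above succeeded, would be:
	# hedged sketch; expected values follow from the sed commands logged above
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	lsmod | grep br_netfilter              # module loaded by the modprobe above
	cat /proc/sys/net/ipv4/ip_forward      # set to 1 by the echo above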
	I1008 19:07:42.412074  585096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:07:42.412155  585096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:07:42.418606  585096 start.go:563] Will wait 60s for crictl version
	I1008 19:07:42.418662  585096 ssh_runner.go:195] Run: which crictl
	I1008 19:07:42.422670  585096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:07:42.469322  585096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:07:42.469432  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.501089  585096 ssh_runner.go:195] Run: crio --version
	I1008 19:07:42.530412  585096 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:07:42.531554  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetIP
	I1008 19:07:42.534587  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.534928  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:07:42.534968  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:07:42.535235  585096 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1008 19:07:42.539279  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:42.552259  585096 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:07:42.552380  585096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:07:42.552447  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:42.588849  585096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:07:42.588928  585096 ssh_runner.go:195] Run: which lz4
	I1008 19:07:42.592785  585096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:07:42.597089  585096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:07:42.597119  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1008 19:07:44.003959  585096 crio.go:462] duration metric: took 1.411213503s to copy over tarball
	I1008 19:07:44.004075  585096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:07:43.467315  585014 node_ready.go:53] node "embed-certs-783146" has status "Ready":"False"
	I1008 19:07:43.975147  585014 node_ready.go:49] node "embed-certs-783146" has status "Ready":"True"
	I1008 19:07:43.975180  585014 node_ready.go:38] duration metric: took 7.512429362s for node "embed-certs-783146" to be "Ready" ...
	I1008 19:07:43.975194  585014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:43.982537  585014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999539  585014 pod_ready.go:93] pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:43.999566  585014 pod_ready.go:82] duration metric: took 16.995034ms for pod "coredns-7c65d6cfc9-kh9nk" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:43.999578  585014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506007  585014 pod_ready.go:93] pod "etcd-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:44.506032  585014 pod_ready.go:82] duration metric: took 506.447262ms for pod "etcd-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:44.506043  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
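	As an illustrative aside (not part of the captured log): the polling above waits for the node and then each system-critical pod to report Ready; a rough manual equivalent with kubectl, assuming a kubeconfig context named after the profile, would be:
	# hedged equivalent of the readiness wait; not what the test harness executes
	kubectl --context embed-certs-783146 wait --for=condition=Ready node/embed-certs-783146 --timeout=6m
	kubectl --context embed-certs-783146 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m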
	I1008 19:07:42.338440  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting to get IP...
	I1008 19:07:42.339286  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.339700  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.339756  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.339684  586305 retry.go:31] will retry after 311.669023ms: waiting for machine to come up
	I1008 19:07:42.653048  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:42.653467  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:42.653494  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:42.653424  586305 retry.go:31] will retry after 361.669647ms: waiting for machine to come up
	I1008 19:07:43.017062  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.017807  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.017840  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.017749  586305 retry.go:31] will retry after 469.651076ms: waiting for machine to come up
	I1008 19:07:43.489336  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.489906  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.489930  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.489809  586305 retry.go:31] will retry after 456.412702ms: waiting for machine to come up
	I1008 19:07:43.948406  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:43.949007  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:43.949031  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:43.948945  586305 retry.go:31] will retry after 717.872812ms: waiting for machine to come up
	I1008 19:07:44.668850  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:44.669423  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:44.669452  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:44.669335  586305 retry.go:31] will retry after 892.723806ms: waiting for machine to come up
	I1008 19:07:45.563628  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:45.564069  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:45.564093  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:45.564036  586305 retry.go:31] will retry after 1.114305551s: waiting for machine to come up
	I1008 19:07:46.159478  585096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155358377s)
	I1008 19:07:46.159512  585096 crio.go:469] duration metric: took 2.155494994s to extract the tarball
	I1008 19:07:46.159532  585096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 19:07:46.196073  585096 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:07:46.239224  585096 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 19:07:46.239253  585096 cache_images.go:84] Images are preloaded, skipping loading
	I1008 19:07:46.239263  585096 kubeadm.go:934] updating node { 192.168.50.213 8444 v1.31.1 crio true true} ...
	I1008 19:07:46.239412  585096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:07:46.239482  585096 ssh_runner.go:195] Run: crio config
	I1008 19:07:46.284916  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:46.284941  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:46.284959  585096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:07:46.284980  585096 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142496 NodeName:default-k8s-diff-port-142496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:07:46.285145  585096 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142496"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:07:46.285218  585096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:07:46.295176  585096 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:07:46.295278  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:07:46.304340  585096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1008 19:07:46.320234  585096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:07:46.336215  585096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
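	As an illustrative aside (not part of the captured log): the rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new on the guest; minikube's own kubeadm invocation is not part of this excerpt, but a config of that shape can be exercised by hand with kubeadm's dry-run mode:
	# hedged, illustrative only; not a command from this log
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run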
	I1008 19:07:46.352435  585096 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I1008 19:07:46.355991  585096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:07:46.367424  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:07:46.491070  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:07:46.509165  585096 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496 for IP: 192.168.50.213
	I1008 19:07:46.509192  585096 certs.go:194] generating shared ca certs ...
	I1008 19:07:46.509213  585096 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:07:46.509413  585096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:07:46.509488  585096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:07:46.509507  585096 certs.go:256] generating profile certs ...
	I1008 19:07:46.509642  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/client.key
	I1008 19:07:46.509724  585096 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key.8b79a92b
	I1008 19:07:46.509806  585096 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key
	I1008 19:07:46.510014  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:07:46.510069  585096 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:07:46.510082  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:07:46.510109  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:07:46.510154  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:07:46.510177  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:07:46.510220  585096 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:07:46.510965  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:07:46.548979  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:07:46.588042  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:07:46.617201  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:07:46.645499  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 19:07:46.673075  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:07:46.705336  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:07:46.727739  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/default-k8s-diff-port-142496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:07:46.755352  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:07:46.782421  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:07:46.804813  585096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:07:46.827321  585096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:07:46.843375  585096 ssh_runner.go:195] Run: openssl version
	I1008 19:07:46.848936  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:07:46.860851  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865320  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.865379  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:07:46.871107  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:07:46.881518  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:07:46.891868  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.895991  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.896026  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:07:46.901219  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:07:46.914282  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:07:46.925095  585096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929407  585096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.929465  585096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:07:46.934778  585096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:07:46.946807  585096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:07:46.951173  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:07:46.957072  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:07:46.962822  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:07:46.968584  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:07:46.974679  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:07:46.980081  585096 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:07:46.985537  585096 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-142496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-142496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:07:46.985659  585096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:07:46.985706  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.025838  585096 cri.go:89] found id: ""
	I1008 19:07:47.025924  585096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:07:47.037778  585096 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:07:47.037800  585096 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:07:47.037847  585096 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:07:47.049787  585096 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:07:47.050778  585096 kubeconfig.go:125] found "default-k8s-diff-port-142496" server: "https://192.168.50.213:8444"
	I1008 19:07:47.052921  585096 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:07:47.062696  585096 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.213
	I1008 19:07:47.062747  585096 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:07:47.062775  585096 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:07:47.062822  585096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:07:47.101981  585096 cri.go:89] found id: ""
	I1008 19:07:47.102054  585096 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:07:47.119421  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:07:47.129168  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:07:47.129189  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:07:47.129253  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:07:47.138071  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:07:47.138125  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:07:47.147202  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:07:47.155923  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:07:47.155979  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:07:47.164829  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.173366  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:07:47.173413  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:07:47.182417  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:07:47.191170  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:07:47.191228  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:07:47.200115  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:07:47.209146  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:47.314572  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.318198  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.003546788s)
	I1008 19:07:48.318245  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.533505  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.617977  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:48.743670  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:07:48.743782  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.244765  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:49.744287  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.243920  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:46.513648  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:49.013409  585014 pod_ready.go:103] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:50.422334  585014 pod_ready.go:93] pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.422364  585014 pod_ready.go:82] duration metric: took 5.916314463s for pod "kube-apiserver-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.422379  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929739  585014 pod_ready.go:93] pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.929775  585014 pod_ready.go:82] duration metric: took 507.386631ms for pod "kube-controller-manager-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.929790  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935612  585014 pod_ready.go:93] pod "kube-proxy-9l7t7" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.935638  585014 pod_ready.go:82] duration metric: took 5.84081ms for pod "kube-proxy-9l7t7" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.935650  585014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941106  585014 pod_ready.go:93] pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace has status "Ready":"True"
	I1008 19:07:50.941131  585014 pod_ready.go:82] duration metric: took 5.47259ms for pod "kube-scheduler-embed-certs-783146" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:50.941143  585014 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:46.679480  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:46.679970  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:46.679999  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:46.679928  586305 retry.go:31] will retry after 1.263473932s: waiting for machine to come up
	I1008 19:07:47.945302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:47.945747  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:47.945784  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:47.945685  586305 retry.go:31] will retry after 1.499818519s: waiting for machine to come up
	I1008 19:07:49.447215  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:49.447595  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:49.447616  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:49.447550  586305 retry.go:31] will retry after 1.658759297s: waiting for machine to come up
	I1008 19:07:51.108028  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:51.108466  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:51.108499  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:51.108430  586305 retry.go:31] will retry after 2.783310271s: waiting for machine to come up
	I1008 19:07:50.744524  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:07:50.830124  585096 api_server.go:72] duration metric: took 2.086461343s to wait for apiserver process to appear ...
	I1008 19:07:50.830161  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:07:50.830192  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:50.830915  585096 api_server.go:269] stopped: https://192.168.50.213:8444/healthz: Get "https://192.168.50.213:8444/healthz": dial tcp 192.168.50.213:8444: connect: connection refused
	I1008 19:07:51.331031  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.027442  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.027468  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.027483  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.101043  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:07:54.101073  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:07:54.330385  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.335009  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.335035  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:54.830407  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:54.835912  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:07:54.835939  585096 api_server.go:103] status: https://192.168.50.213:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:07:55.330454  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:07:55.336271  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:07:55.343556  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:07:55.343586  585096 api_server.go:131] duration metric: took 4.513416619s to wait for apiserver health ...
	I1008 19:07:55.343604  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:07:55.343612  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:07:55.345259  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:07:55.346612  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:07:55.357899  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:07:55.383903  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:07:52.948407  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:55.449059  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:53.895592  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:53.896059  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:53.896088  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:53.896010  586305 retry.go:31] will retry after 2.642423841s: waiting for machine to come up
	I1008 19:07:56.540104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:07:56.540507  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | unable to find current IP address of domain old-k8s-version-256554 in network mk-old-k8s-version-256554
	I1008 19:07:56.540547  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | I1008 19:07:56.540452  586305 retry.go:31] will retry after 3.959898173s: waiting for machine to come up
	I1008 19:07:55.397903  585096 system_pods.go:59] 8 kube-system pods found
	I1008 19:07:55.397935  585096 system_pods.go:61] "coredns-7c65d6cfc9-tkg8j" [0b436a1f-2b8e-4a5f-8063-695480275f2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:07:55.397944  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [cc702ae5-7e74-4a18-942e-1d236d39c43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:07:55.397952  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [da72d2f3-aab5-42c3-9733-7c0ce470e61e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:07:55.397959  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [de964717-b4de-4c7c-a9b5-164e7a048d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:07:55.397966  585096 system_pods.go:61] "kube-proxy-lwggr" [d5d96599-c3d3-4eba-a2ad-0c027e8ef1ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 19:07:55.397971  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [9218d69d-97ca-4680-856b-95c43fa371ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:07:55.397976  585096 system_pods.go:61] "metrics-server-6867b74b74-pfc2c" [9bafd6da-a33e-4182-a0d7-5e4c9473f057] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:07:55.397982  585096 system_pods.go:61] "storage-provisioner" [b60980ab-2552-404e-b351-4b163a075732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 19:07:55.397988  585096 system_pods.go:74] duration metric: took 14.056648ms to wait for pod list to return data ...
	I1008 19:07:55.397997  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:07:55.403870  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:07:55.403906  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:07:55.403920  585096 node_conditions.go:105] duration metric: took 5.917994ms to run NodePressure ...
	I1008 19:07:55.403941  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:07:55.677555  585096 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682514  585096 kubeadm.go:739] kubelet initialised
	I1008 19:07:55.682539  585096 kubeadm.go:740] duration metric: took 4.953783ms waiting for restarted kubelet to initialise ...
	I1008 19:07:55.682550  585096 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:07:55.688641  585096 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:07:57.695361  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.195582  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:07:57.948167  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.446946  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:00.504139  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504539  585386 main.go:141] libmachine: (old-k8s-version-256554) Found IP for machine: 192.168.39.90
	I1008 19:08:00.504570  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has current primary IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.504578  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserving static IP address...
	I1008 19:08:00.504976  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.505000  585386 main.go:141] libmachine: (old-k8s-version-256554) Reserved static IP address: 192.168.39.90
	I1008 19:08:00.505021  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | skip adding static IP to network mk-old-k8s-version-256554 - found existing host DHCP lease matching {name: "old-k8s-version-256554", mac: "52:54:00:9d:97:b8", ip: "192.168.39.90"}
	I1008 19:08:00.505061  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Getting to WaitForSSH function...
	I1008 19:08:00.505088  585386 main.go:141] libmachine: (old-k8s-version-256554) Waiting for SSH to be available...
	I1008 19:08:00.507469  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.507835  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.507866  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.508009  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH client type: external
	I1008 19:08:00.508038  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa (-rw-------)
	I1008 19:08:00.508066  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:00.508082  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | About to run SSH command:
	I1008 19:08:00.508095  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | exit 0
	I1008 19:08:00.635012  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:00.635385  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetConfigRaw
	I1008 19:08:00.636074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:00.639005  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.639421  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.639816  585386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/config.json ...
	I1008 19:08:00.640049  585386 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:00.640074  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:00.640307  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.643040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643382  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.643411  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.643545  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.643743  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.643943  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.644080  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.644238  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.644435  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.644446  585386 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:00.758888  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:00.758923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759221  585386 buildroot.go:166] provisioning hostname "old-k8s-version-256554"
	I1008 19:08:00.759253  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:00.759428  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.763040  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763417  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.763456  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.763657  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.763860  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764041  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.764199  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.764386  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.764613  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.764626  585386 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-256554 && echo "old-k8s-version-256554" | sudo tee /etc/hostname
	I1008 19:08:00.898623  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-256554
	
	I1008 19:08:00.898661  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:00.901717  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902104  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:00.902136  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:00.902299  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:00.902590  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902788  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:00.902930  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:00.903146  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:00.903392  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:00.903442  585386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-256554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-256554/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-256554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:01.026257  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:01.026283  585386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:01.026356  585386 buildroot.go:174] setting up certificates
	I1008 19:08:01.026370  585386 provision.go:84] configureAuth start
	I1008 19:08:01.026382  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetMachineName
	I1008 19:08:01.026671  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.029396  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029760  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.029798  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.029897  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.032429  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032785  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.032814  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.032918  585386 provision.go:143] copyHostCerts
	I1008 19:08:01.032990  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:01.033003  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:01.033064  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:01.033212  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:01.033225  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:01.033256  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:01.033340  585386 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:01.033350  585386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:01.033376  585386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:01.033440  585386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-256554 san=[127.0.0.1 192.168.39.90 localhost minikube old-k8s-version-256554]
	I1008 19:08:01.208342  585386 provision.go:177] copyRemoteCerts
	I1008 19:08:01.208416  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:01.208450  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.211173  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211555  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.211586  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.211753  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.211940  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.212059  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.212178  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.295696  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:01.319904  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 19:08:01.342458  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 19:08:01.365245  585386 provision.go:87] duration metric: took 338.862707ms to configureAuth
	I1008 19:08:01.365273  585386 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:01.365444  585386 config.go:182] Loaded profile config "old-k8s-version-256554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1008 19:08:01.365528  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.368074  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368363  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.368394  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.368525  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.368721  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.368923  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.369077  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.369243  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.369404  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.369419  585386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:01.596670  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:01.596706  585386 machine.go:96] duration metric: took 956.642025ms to provisionDockerMachine
	I1008 19:08:01.596724  585386 start.go:293] postStartSetup for "old-k8s-version-256554" (driver="kvm2")
	I1008 19:08:01.596740  585386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:01.596785  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.597190  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:01.597231  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.600302  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600660  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.600691  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.600957  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.601136  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.601272  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.601447  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.834691  584371 start.go:364] duration metric: took 54.903126319s to acquireMachinesLock for "no-preload-966632"
	I1008 19:08:01.834745  584371 start.go:96] Skipping create...Using existing machine configuration
	I1008 19:08:01.834753  584371 fix.go:54] fixHost starting: 
	I1008 19:08:01.835158  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:01.835200  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:01.854850  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1008 19:08:01.855220  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:01.855740  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:01.855770  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:01.856201  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:01.856428  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:01.856587  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:01.857921  584371 fix.go:112] recreateIfNeeded on no-preload-966632: state=Stopped err=<nil>
	I1008 19:08:01.857943  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	W1008 19:08:01.858110  584371 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 19:08:01.859994  584371 out.go:177] * Restarting existing kvm2 VM for "no-preload-966632" ...
	I1008 19:08:01.684581  585386 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:01.688719  585386 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:01.688745  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:01.688810  585386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:01.688905  585386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:01.689016  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:01.699424  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:01.722056  585386 start.go:296] duration metric: took 125.3184ms for postStartSetup
	I1008 19:08:01.722094  585386 fix.go:56] duration metric: took 20.699269758s for fixHost
	I1008 19:08:01.722121  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.724795  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725166  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.725197  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.725368  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.725586  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725754  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.725915  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.726067  585386 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:01.726265  585386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1008 19:08:01.726276  585386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:01.834507  585386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414481.784600585
	
	I1008 19:08:01.834528  585386 fix.go:216] guest clock: 1728414481.784600585
	I1008 19:08:01.834536  585386 fix.go:229] Guest: 2024-10-08 19:08:01.784600585 +0000 UTC Remote: 2024-10-08 19:08:01.722099716 +0000 UTC m=+220.104411267 (delta=62.500869ms)
	I1008 19:08:01.834587  585386 fix.go:200] guest clock delta is within tolerance: 62.500869ms
	I1008 19:08:01.834594  585386 start.go:83] releasing machines lock for "old-k8s-version-256554", held for 20.811816039s
	I1008 19:08:01.834626  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.834911  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:01.837576  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.837889  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.837908  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.838071  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838543  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838707  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .DriverName
	I1008 19:08:01.838801  585386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:01.838841  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.838923  585386 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:01.838948  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHHostname
	I1008 19:08:01.841477  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841826  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.841854  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.841874  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842064  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842247  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842297  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:01.842362  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:01.842421  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842539  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHPort
	I1008 19:08:01.842615  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.842682  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHKeyPath
	I1008 19:08:01.842821  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetSSHUsername
	I1008 19:08:01.842972  585386 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/old-k8s-version-256554/id_rsa Username:docker}
	I1008 19:08:01.928595  585386 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:01.955722  585386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:02.101635  585386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:02.108125  585386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:02.108200  585386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:02.124670  585386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:02.124697  585386 start.go:495] detecting cgroup driver to use...
	I1008 19:08:02.124764  585386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:02.139787  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:02.153256  585386 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:02.153324  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:02.170514  585386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:02.189147  585386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:02.306831  585386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:02.473018  585386 docker.go:233] disabling docker service ...
	I1008 19:08:02.473097  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:02.487835  585386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:02.501103  585386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:02.642263  585386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:02.775105  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:02.799476  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:02.818440  585386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1008 19:08:02.818512  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.829526  585386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:02.829601  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.840727  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.855124  585386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:02.866409  585386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:02.879398  585386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:02.889439  585386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:02.889501  585386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:02.904092  585386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:02.914775  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:03.057036  585386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:03.160532  585386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:03.160616  585386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:03.166823  585386 start.go:563] Will wait 60s for crictl version
	I1008 19:08:03.166904  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:03.170870  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:03.209472  585386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:03.209588  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.238152  585386 ssh_runner.go:195] Run: crio --version
	I1008 19:08:03.269608  585386 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
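
[Annotation] The cri-o preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed commands executed over SSH (pause image, then cgroup manager). A small Go sketch of how such sed invocations can be assembled is shown below; it only mirrors the command strings visible in the log and is not minikube's crio.go helper.

package main

import "fmt"

// sedSet returns a sed invocation that replaces the value of a key in
// /etc/crio/crio.conf.d/02-crio.conf, matching the commands in the log above.
func sedSet(key, value string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, key, key, value)
}

func main() {
	// In minikube these strings are executed on the guest over SSH; here we
	// simply print them.
	fmt.Println(sedSet("pause_image", "registry.k8s.io/pause:3.2"))
	fmt.Println(sedSet("cgroup_manager", "cgroupfs"))
}
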
	I1008 19:08:01.861355  584371 main.go:141] libmachine: (no-preload-966632) Calling .Start
	I1008 19:08:01.861539  584371 main.go:141] libmachine: (no-preload-966632) Ensuring networks are active...
	I1008 19:08:01.862455  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network default is active
	I1008 19:08:01.862878  584371 main.go:141] libmachine: (no-preload-966632) Ensuring network mk-no-preload-966632 is active
	I1008 19:08:01.863368  584371 main.go:141] libmachine: (no-preload-966632) Getting domain xml...
	I1008 19:08:01.864106  584371 main.go:141] libmachine: (no-preload-966632) Creating domain...
	I1008 19:08:03.179854  584371 main.go:141] libmachine: (no-preload-966632) Waiting to get IP...
	I1008 19:08:03.180838  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.181232  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.181301  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.181206  586496 retry.go:31] will retry after 229.567854ms: waiting for machine to come up
	I1008 19:08:03.412710  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.413201  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.413225  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.413170  586496 retry.go:31] will retry after 361.675143ms: waiting for machine to come up
	I1008 19:08:03.776466  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:03.777140  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:03.777184  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:03.777047  586496 retry.go:31] will retry after 323.194852ms: waiting for machine to come up
	I1008 19:08:04.101865  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.102357  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.102388  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.102310  586496 retry.go:31] will retry after 484.995282ms: waiting for machine to come up
	I1008 19:08:02.698935  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:05.195930  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:02.447582  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:04.450889  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:03.270765  585386 main.go:141] libmachine: (old-k8s-version-256554) Calling .GetIP
	I1008 19:08:03.273775  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274194  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:97:b8", ip: ""} in network mk-old-k8s-version-256554: {Iface:virbr3 ExpiryTime:2024-10-08 20:07:52 +0000 UTC Type:0 Mac:52:54:00:9d:97:b8 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:old-k8s-version-256554 Clientid:01:52:54:00:9d:97:b8}
	I1008 19:08:03.274224  585386 main.go:141] libmachine: (old-k8s-version-256554) DBG | domain old-k8s-version-256554 has defined IP address 192.168.39.90 and MAC address 52:54:00:9d:97:b8 in network mk-old-k8s-version-256554
	I1008 19:08:03.274471  585386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:03.278736  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:03.291051  585386 kubeadm.go:883] updating cluster {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:03.291156  585386 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 19:08:03.291208  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:03.337081  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:03.337154  585386 ssh_runner.go:195] Run: which lz4
	I1008 19:08:03.341356  585386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 19:08:03.345611  585386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 19:08:03.345643  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1008 19:08:04.956738  585386 crio.go:462] duration metric: took 1.615417109s to copy over tarball
	I1008 19:08:04.956828  585386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 19:08:04.589063  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:04.589752  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:04.589780  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:04.589706  586496 retry.go:31] will retry after 543.703113ms: waiting for machine to come up
	I1008 19:08:05.135522  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.135997  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.136023  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.135944  586496 retry.go:31] will retry after 617.479763ms: waiting for machine to come up
	I1008 19:08:05.754978  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:05.755541  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:05.755568  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:05.755486  586496 retry.go:31] will retry after 849.017716ms: waiting for machine to come up
	I1008 19:08:06.606621  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:06.607072  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:06.607105  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:06.607023  586496 retry.go:31] will retry after 1.133489837s: waiting for machine to come up
	I1008 19:08:07.742713  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:07.743299  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:07.743329  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:07.743252  586496 retry.go:31] will retry after 1.797316795s: waiting for machine to come up
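
[Annotation] The repeated "will retry after …: waiting for machine to come up" lines follow a retry pattern with a growing, jittered delay while the VM acquires a DHCP lease. A self-contained Go sketch of that pattern follows; the waitForIP helper, attempt count, and example IP are illustrative, not the real retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a randomized,
// growing delay between attempts, similar to the retry.go lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.10", nil // placeholder address for the example
	}, 10)
	fmt.Println(ip, err)
}
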
	I1008 19:08:07.196317  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.698409  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.698443  585096 pod_ready.go:82] duration metric: took 12.009772792s for pod "coredns-7c65d6cfc9-tkg8j" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.698475  585096 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.708991  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.709015  585096 pod_ready.go:82] duration metric: took 10.527401ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.709028  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714343  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:07.714369  585096 pod_ready.go:82] duration metric: took 5.331417ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:07.714383  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.118973  585096 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:06.948829  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:09.448376  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:07.871094  585386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.914223117s)
	I1008 19:08:07.871140  585386 crio.go:469] duration metric: took 2.914368245s to extract the tarball
	I1008 19:08:07.871151  585386 ssh_runner.go:146] rm: /preloaded.tar.lz4
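
[Annotation] The preload step above copies the lz4-compressed image tarball to the guest and unpacks it with tar before deleting it. A rough Go sketch of the extraction call is shown below; it assumes the command runs where the Go code runs (in minikube it actually runs on the guest over SSH), and the function name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball under dir, mirroring
// the `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf
// /preloaded.tar.lz4` step in the log above.
func extractPreload(tarball, dir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
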
	I1008 19:08:07.914183  585386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:07.955397  585386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1008 19:08:07.955422  585386 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:07.955511  585386 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.955535  585386 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.955545  585386 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.955594  585386 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1008 19:08:07.955531  585386 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:07.955672  585386 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.955573  585386 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.955506  585386 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957283  585386 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:07.957298  585386 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:07.957297  585386 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:07.957310  585386 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:07.957284  585386 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1008 19:08:07.957360  585386 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:07.957368  585386 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:07.957448  585386 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.149737  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.150108  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.150401  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.159064  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.161526  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.165666  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.177276  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1008 19:08:08.286657  585386 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1008 19:08:08.286698  585386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.286744  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334667  585386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1008 19:08:08.334725  585386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.334775  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.334869  585386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1008 19:08:08.334911  585386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.334953  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356236  585386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1008 19:08:08.356287  585386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.356290  585386 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1008 19:08:08.356323  585386 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.356334  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.356364  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361038  585386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1008 19:08:08.361074  585386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.361114  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.361111  585386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1008 19:08:08.361145  585386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1008 19:08:08.361180  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.361211  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.361239  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.361187  585386 ssh_runner.go:195] Run: which crictl
	I1008 19:08:08.364913  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.365017  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.479836  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.479867  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.479964  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.480002  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.480098  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.480155  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.480235  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.607740  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.649998  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1008 19:08:08.650122  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1008 19:08:08.650164  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.650205  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1008 19:08:08.650275  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1008 19:08:08.650352  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1008 19:08:08.713481  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1008 19:08:08.809958  585386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:08.826816  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1008 19:08:08.826978  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1008 19:08:08.827037  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1008 19:08:08.827104  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1008 19:08:08.827228  585386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1008 19:08:08.827252  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1008 19:08:08.838721  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1008 19:08:08.990613  585386 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1008 19:08:08.990713  585386 cache_images.go:92] duration metric: took 1.03526949s to LoadCachedImages
	W1008 19:08:08.990795  585386 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1008 19:08:08.990812  585386 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.20.0 crio true true} ...
	I1008 19:08:08.990964  585386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-256554 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:08.991062  585386 ssh_runner.go:195] Run: crio config
	I1008 19:08:09.037168  585386 cni.go:84] Creating CNI manager for ""
	I1008 19:08:09.037192  585386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:09.037210  585386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:09.037232  585386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-256554 NodeName:old-k8s-version-256554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1008 19:08:09.037488  585386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-256554"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:09.037579  585386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1008 19:08:09.048095  585386 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:09.048171  585386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:09.058043  585386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1008 19:08:09.076678  585386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:09.093620  585386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1008 19:08:09.115974  585386 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:09.120489  585386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:09.133593  585386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:09.269669  585386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:09.287513  585386 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554 for IP: 192.168.39.90
	I1008 19:08:09.287554  585386 certs.go:194] generating shared ca certs ...
	I1008 19:08:09.287576  585386 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.287781  585386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:09.287876  585386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:09.287892  585386 certs.go:256] generating profile certs ...
	I1008 19:08:09.288010  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/client.key
	I1008 19:08:09.288088  585386 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key.cd4ca3ea
	I1008 19:08:09.288147  585386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key
	I1008 19:08:09.288320  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:09.288369  585386 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:09.288384  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:09.288417  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:09.288456  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:09.288497  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:09.288557  585386 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:09.289514  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:09.345517  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:09.376497  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:09.419213  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:09.446447  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 19:08:09.478034  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 19:08:09.512407  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:09.549096  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/old-k8s-version-256554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 19:08:09.576690  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:09.604780  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:09.633039  585386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:09.659106  585386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:09.676447  585386 ssh_runner.go:195] Run: openssl version
	I1008 19:08:09.682548  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:09.693601  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698266  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.698366  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:09.706151  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:09.717046  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:09.727625  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732226  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.732289  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:09.737920  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
	I1008 19:08:09.748830  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:09.759838  585386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764499  585386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.764620  585386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:09.770413  585386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
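
[Annotation] The certificate steps above copy each PEM into /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0). A rough Go sketch of that linking step follows; it assumes the openssl binary is available, and the function name and paths are illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink, as the openssl/ln steps in the log do.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Recreate the link if it already exists.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
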
	I1008 19:08:09.782357  585386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:09.788406  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:09.794929  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:09.800825  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:09.807265  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:09.813601  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:09.819327  585386 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 19:08:09.825233  585386 kubeadm.go:392] StartCluster: {Name:old-k8s-version-256554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-256554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:09.825351  585386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:09.825399  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:09.866771  585386 cri.go:89] found id: ""
	I1008 19:08:09.866857  585386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:09.880437  585386 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:09.880464  585386 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:09.880523  585386 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:09.890688  585386 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:09.892027  585386 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-256554" does not appear in /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:09.893006  585386 kubeconfig.go:62] /home/jenkins/minikube-integration/19774-529764/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-256554" cluster setting kubeconfig missing "old-k8s-version-256554" context setting]
	I1008 19:08:09.894360  585386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:09.980740  585386 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:09.992829  585386 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.90
	I1008 19:08:09.992876  585386 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:09.992890  585386 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:09.992939  585386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:10.028982  585386 cri.go:89] found id: ""
	I1008 19:08:10.029066  585386 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:10.045348  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:10.055102  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:10.055126  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:10.055170  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:10.063839  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:10.063892  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:10.073391  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:10.082189  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:10.082255  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:10.091590  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.101569  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:10.101624  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:10.112811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:10.125314  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:10.125397  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:08:10.135176  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:10.145288  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:10.278386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.228932  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.493058  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:11.610545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:09.541879  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:09.542340  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:09.542372  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:09.542288  586496 retry.go:31] will retry after 2.238590286s: waiting for machine to come up
	I1008 19:08:11.783440  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:11.783909  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:11.783945  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:11.783858  586496 retry.go:31] will retry after 2.226110801s: waiting for machine to come up
	I1008 19:08:14.012103  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:14.012538  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:14.012561  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:14.012493  586496 retry.go:31] will retry after 2.298206633s: waiting for machine to come up
	I1008 19:08:10.849833  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.849856  585096 pod_ready.go:82] duration metric: took 3.13546554s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.849868  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858341  585096 pod_ready.go:93] pod "kube-proxy-lwggr" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.858367  585096 pod_ready.go:82] duration metric: took 8.492572ms for pod "kube-proxy-lwggr" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.858379  585096 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865890  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:10.865909  585096 pod_ready.go:82] duration metric: took 7.521945ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:10.865918  585096 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:12.873861  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:15.372408  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.450482  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:13.948331  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:11.705690  585386 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:11.705797  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.205975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:12.705946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.206919  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:13.706046  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.206346  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:14.706150  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.206767  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:15.706755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.206798  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:16.313868  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:16.314460  584371 main.go:141] libmachine: (no-preload-966632) DBG | unable to find current IP address of domain no-preload-966632 in network mk-no-preload-966632
	I1008 19:08:16.314484  584371 main.go:141] libmachine: (no-preload-966632) DBG | I1008 19:08:16.314424  586496 retry.go:31] will retry after 3.672085858s: waiting for machine to come up
	I1008 19:08:17.872689  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.372637  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.448090  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:18.947580  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:20.948804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:16.706645  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.206130  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:17.705915  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.206201  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:18.706161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.206106  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.706708  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.206878  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:20.706895  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:21.205938  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:19.989014  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989556  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has current primary IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.989576  584371 main.go:141] libmachine: (no-preload-966632) Found IP for machine: 192.168.61.141
	I1008 19:08:19.989589  584371 main.go:141] libmachine: (no-preload-966632) Reserving static IP address...
	I1008 19:08:19.990000  584371 main.go:141] libmachine: (no-preload-966632) Reserved static IP address: 192.168.61.141
	I1008 19:08:19.990036  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.990048  584371 main.go:141] libmachine: (no-preload-966632) Waiting for SSH to be available...
	I1008 19:08:19.990068  584371 main.go:141] libmachine: (no-preload-966632) DBG | skip adding static IP to network mk-no-preload-966632 - found existing host DHCP lease matching {name: "no-preload-966632", mac: "52:54:00:6a:3f:c2", ip: "192.168.61.141"}
	I1008 19:08:19.990076  584371 main.go:141] libmachine: (no-preload-966632) DBG | Getting to WaitForSSH function...
	I1008 19:08:19.992644  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.992970  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:19.993010  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:19.993081  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH client type: external
	I1008 19:08:19.993104  584371 main.go:141] libmachine: (no-preload-966632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa (-rw-------)
	I1008 19:08:19.993136  584371 main.go:141] libmachine: (no-preload-966632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 19:08:19.993152  584371 main.go:141] libmachine: (no-preload-966632) DBG | About to run SSH command:
	I1008 19:08:19.993174  584371 main.go:141] libmachine: (no-preload-966632) DBG | exit 0
	I1008 19:08:20.118205  584371 main.go:141] libmachine: (no-preload-966632) DBG | SSH cmd err, output: <nil>: 
	I1008 19:08:20.118616  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetConfigRaw
	I1008 19:08:20.119326  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.122203  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122678  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.122708  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.122926  584371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/config.json ...
	I1008 19:08:20.123144  584371 machine.go:93] provisionDockerMachine start ...
	I1008 19:08:20.123164  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:20.123360  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.125759  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126083  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.126108  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.126265  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.126442  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.126793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.126980  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.127189  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.127201  584371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 19:08:20.234458  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 19:08:20.234491  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.234781  584371 buildroot.go:166] provisioning hostname "no-preload-966632"
	I1008 19:08:20.234811  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.235044  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.237673  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.237993  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.238016  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.238221  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.238418  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238612  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.238806  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.238981  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.239176  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.239203  584371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-966632 && echo "no-preload-966632" | sudo tee /etc/hostname
	I1008 19:08:20.360621  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-966632
	
	I1008 19:08:20.360649  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.363600  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.363909  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.363947  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.364166  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.364297  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364426  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.364510  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.364630  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.364855  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.364881  584371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-966632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-966632/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-966632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 19:08:20.483101  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 19:08:20.483131  584371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19774-529764/.minikube CaCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19774-529764/.minikube}
	I1008 19:08:20.483149  584371 buildroot.go:174] setting up certificates
	I1008 19:08:20.483161  584371 provision.go:84] configureAuth start
	I1008 19:08:20.483171  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetMachineName
	I1008 19:08:20.483429  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:20.486467  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.486838  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.486871  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.487037  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.489207  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489531  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.489557  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.489655  584371 provision.go:143] copyHostCerts
	I1008 19:08:20.489726  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem, removing ...
	I1008 19:08:20.489737  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem
	I1008 19:08:20.489803  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/ca.pem (1082 bytes)
	I1008 19:08:20.489927  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem, removing ...
	I1008 19:08:20.489939  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem
	I1008 19:08:20.489987  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/cert.pem (1123 bytes)
	I1008 19:08:20.490072  584371 exec_runner.go:144] found /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem, removing ...
	I1008 19:08:20.490083  584371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem
	I1008 19:08:20.490110  584371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19774-529764/.minikube/key.pem (1675 bytes)
	I1008 19:08:20.490231  584371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem org=jenkins.no-preload-966632 san=[127.0.0.1 192.168.61.141 localhost minikube no-preload-966632]
	I1008 19:08:20.618050  584371 provision.go:177] copyRemoteCerts
	I1008 19:08:20.618117  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 19:08:20.618149  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.621118  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621458  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.621485  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.621670  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.621875  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.622056  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.622224  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:20.704439  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 19:08:20.730441  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 19:08:20.755072  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 19:08:20.777513  584371 provision.go:87] duration metric: took 294.340685ms to configureAuth
	I1008 19:08:20.777550  584371 buildroot.go:189] setting minikube options for container-runtime
	I1008 19:08:20.777774  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:08:20.777873  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:20.780540  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.780956  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:20.780995  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:20.781185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:20.781423  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781615  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:20.781793  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:20.781989  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:20.782179  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:20.782203  584371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 19:08:21.003896  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 19:08:21.003925  584371 machine.go:96] duration metric: took 880.766243ms to provisionDockerMachine
	I1008 19:08:21.003940  584371 start.go:293] postStartSetup for "no-preload-966632" (driver="kvm2")
	I1008 19:08:21.003955  584371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 19:08:21.003974  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.004286  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 19:08:21.004312  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.007138  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007472  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.007500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.007610  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.007820  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.007991  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.008163  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.093075  584371 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 19:08:21.097048  584371 info.go:137] Remote host: Buildroot 2023.02.9
	I1008 19:08:21.097076  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/addons for local assets ...
	I1008 19:08:21.097160  584371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19774-529764/.minikube/files for local assets ...
	I1008 19:08:21.097254  584371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem -> 5370132.pem in /etc/ssl/certs
	I1008 19:08:21.097370  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 19:08:21.106698  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:21.130484  584371 start.go:296] duration metric: took 126.530716ms for postStartSetup
	I1008 19:08:21.130526  584371 fix.go:56] duration metric: took 19.295774496s for fixHost
	I1008 19:08:21.130550  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.133361  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.133717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.133744  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.134048  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.134269  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134525  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.134710  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.134888  584371 main.go:141] libmachine: Using SSH client type: native
	I1008 19:08:21.135119  584371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.141 22 <nil> <nil>}
	I1008 19:08:21.135135  584371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 19:08:21.242740  584371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728414501.194174379
	
	I1008 19:08:21.242765  584371 fix.go:216] guest clock: 1728414501.194174379
	I1008 19:08:21.242776  584371 fix.go:229] Guest: 2024-10-08 19:08:21.194174379 +0000 UTC Remote: 2024-10-08 19:08:21.130530022 +0000 UTC m=+356.786912807 (delta=63.644357ms)
	I1008 19:08:21.242823  584371 fix.go:200] guest clock delta is within tolerance: 63.644357ms
	I1008 19:08:21.242835  584371 start.go:83] releasing machines lock for "no-preload-966632", held for 19.408108613s
	I1008 19:08:21.242857  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.243112  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:21.245967  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246378  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.246409  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.246731  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247314  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247500  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:21.247588  584371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 19:08:21.247640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.247706  584371 ssh_runner.go:195] Run: cat /version.json
	I1008 19:08:21.247731  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:21.250191  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250228  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250665  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250694  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250717  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:21.250729  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:21.250789  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250948  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:21.250962  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251129  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:21.251314  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.251334  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:21.251462  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:21.353600  584371 ssh_runner.go:195] Run: systemctl --version
	I1008 19:08:21.360031  584371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 19:08:21.502001  584371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 19:08:21.508846  584371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 19:08:21.508938  584371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 19:08:21.524597  584371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 19:08:21.524626  584371 start.go:495] detecting cgroup driver to use...
	I1008 19:08:21.524699  584371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 19:08:21.541500  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 19:08:21.553886  584371 docker.go:217] disabling cri-docker service (if available) ...
	I1008 19:08:21.553943  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 19:08:21.567027  584371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 19:08:21.579965  584371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 19:08:21.692823  584371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 19:08:21.844393  584371 docker.go:233] disabling docker service ...
	I1008 19:08:21.844461  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 19:08:21.860471  584371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 19:08:21.873229  584371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 19:08:22.003106  584371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 19:08:22.129301  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 19:08:22.143314  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 19:08:22.161423  584371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 19:08:22.161494  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.171355  584371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 19:08:22.171429  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.180962  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.190212  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.199737  584371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 19:08:22.209488  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.219051  584371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.235430  584371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 19:08:22.245007  584371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 19:08:22.253705  584371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 19:08:22.253748  584371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 19:08:22.265343  584371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 19:08:22.275245  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:22.380960  584371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 19:08:22.471004  584371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 19:08:22.471067  584371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 19:08:22.475520  584371 start.go:563] Will wait 60s for crictl version
	I1008 19:08:22.475598  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.479271  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 19:08:22.523709  584371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 19:08:22.523787  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.551307  584371 ssh_runner.go:195] Run: crio --version
	I1008 19:08:22.579271  584371 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1008 19:08:22.580608  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetIP
	I1008 19:08:22.583417  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583783  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:22.583825  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:22.583991  584371 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1008 19:08:22.587937  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 19:08:22.600324  584371 kubeadm.go:883] updating cluster {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 19:08:22.600465  584371 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1008 19:08:22.600506  584371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 19:08:22.641111  584371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1008 19:08:22.641139  584371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 19:08:22.641194  584371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.641224  584371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.641284  584371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.641307  584371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.641377  584371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.641407  584371 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 19:08:22.641742  584371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642057  584371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.642568  584371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.642576  584371 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 19:08:22.642669  584371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.642792  584371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.642876  584371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.642894  584371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.643310  584371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.799972  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.811504  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.815340  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.815659  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.817303  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.858380  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.864688  584371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1008 19:08:22.864727  584371 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:22.864762  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.877332  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1008 19:08:22.934971  584371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1008 19:08:22.935035  584371 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:22.935085  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945549  584371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1008 19:08:22.945594  584371 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:22.945644  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945645  584371 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1008 19:08:22.945683  584371 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:22.945685  584371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1008 19:08:22.945730  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.945733  584371 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:22.945796  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981887  584371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1008 19:08:22.982012  584371 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:22.982059  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:22.981954  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.082208  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.082210  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.082304  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.082411  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.082430  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.082543  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.178344  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.196633  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.196665  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.196733  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.209763  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.209830  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 19:08:23.310142  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1008 19:08:23.317659  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1008 19:08:23.317731  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1008 19:08:23.327221  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1008 19:08:23.331490  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1008 19:08:23.346298  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 19:08:23.346412  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.435656  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 19:08:23.435679  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1008 19:08:23.435783  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:23.435788  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:23.441591  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 19:08:23.441673  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:23.441696  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 19:08:23.441782  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 19:08:23.441814  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:23.441856  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:23.441901  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1008 19:08:23.441918  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.441947  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1008 19:08:23.445597  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1008 19:08:23.445630  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1008 19:08:23.449022  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1008 19:08:23.450009  584371 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:22.373452  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:24.872600  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:23.448074  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:25.449287  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:21.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.206387  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:22.706184  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.206209  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:23.706506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.206243  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:24.705934  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.206452  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:25.706879  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:26.205890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
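
The repeated pgrep calls above are the runner waiting for the kube-apiserver process to appear; a minimal shell sketch of that wait loop (the pattern and the roughly 500 ms interval come from the log, the loop itself is an assumption):

    # Poll until a kube-apiserver process started for this minikube profile exists.
    # -x: match the whole command line exactly, -n: newest match, -f: match the full command line.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # the log shows attempts roughly every 500 ms
    done
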
	I1008 19:08:25.950280  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.508431356s)
	I1008 19:08:25.950340  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.508402491s)
	I1008 19:08:25.950344  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1008 19:08:25.950357  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1008 19:08:25.950545  584371 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.50050623s)
	I1008 19:08:25.950600  584371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1008 19:08:25.950611  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.508516442s)
	I1008 19:08:25.950637  584371 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:25.950648  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1008 19:08:25.950680  584371 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:25.950688  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:08:25.950727  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1008 19:08:29.225357  584371 ssh_runner.go:235] Completed: which crictl: (3.274648192s)
	I1008 19:08:29.225514  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:29.225532  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.27477814s)
	I1008 19:08:29.225561  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1008 19:08:29.225593  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:29.225627  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1008 19:08:27.373617  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.374173  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:27.948313  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:29.948750  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:26.706463  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.206022  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:27.706309  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:28.706262  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:29.706634  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.206866  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.706260  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:31.206440  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:30.696201  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.470655089s)
	I1008 19:08:30.696255  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.470604601s)
	I1008 19:08:30.696284  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1008 19:08:30.696296  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:30.696317  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.696365  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1008 19:08:30.740520  584371 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:32.685896  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.989500601s)
	I1008 19:08:32.685941  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1008 19:08:32.685971  584371 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.685971  584371 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.945412846s)
	I1008 19:08:32.686046  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1008 19:08:32.686045  584371 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 19:08:32.686186  584371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:31.872718  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:33.873665  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:32.447765  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:34.948257  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:31.706134  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.206573  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:32.706526  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.206443  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:33.705949  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.206701  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.705972  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.206685  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:35.706682  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:36.206449  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:34.663874  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.977781248s)
	I1008 19:08:34.663914  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1008 19:08:34.663939  584371 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:34.663942  584371 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.977724244s)
	I1008 19:08:34.663973  584371 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1008 19:08:34.663991  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1008 19:08:36.833283  584371 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.169263327s)
	I1008 19:08:36.833320  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1008 19:08:36.833353  584371 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:36.833417  584371 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 19:08:37.485901  584371 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19774-529764/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 19:08:37.485954  584371 cache_images.go:123] Successfully loaded all cached images
	I1008 19:08:37.485961  584371 cache_images.go:92] duration metric: took 14.844810749s to LoadCachedImages
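
A condensed sketch of the cache-load pattern visible in the lines above (image names and paths are taken from the log; the loop and the copy placeholder are assumptions). Images loaded with podman become visible to CRI-O because both use the same containers/storage back end on the node:

    for name in kube-apiserver_v1.31.1 kube-controller-manager_v1.31.1 \
                kube-scheduler_v1.31.1 kube-proxy_v1.31.1 etcd_3.5.15-0 \
                coredns_v1.11.3 storage-provisioner_v5; do
      tar=/var/lib/minikube/images/$name
      # the "copy: skipping ... (exists)" lines correspond to this existence check
      stat -c "%s %y" "$tar" >/dev/null 2>&1 || echo "tarball missing, would be copied from the host cache"
      sudo podman load -i "$tar"
    done
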
	I1008 19:08:37.485973  584371 kubeadm.go:934] updating node { 192.168.61.141 8443 v1.31.1 crio true true} ...
	I1008 19:08:37.486084  584371 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-966632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 19:08:37.486149  584371 ssh_runner.go:195] Run: crio config
	I1008 19:08:37.544511  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:37.544535  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:37.544554  584371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 19:08:37.544576  584371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-966632 NodeName:no-preload-966632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 19:08:37.544718  584371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-966632"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 19:08:37.544792  584371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1008 19:08:37.556979  584371 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 19:08:37.557049  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 19:08:37.566249  584371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1008 19:08:37.583303  584371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 19:08:37.599535  584371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1008 19:08:37.616315  584371 ssh_runner.go:195] Run: grep 192.168.61.141	control-plane.minikube.internal$ /etc/hosts
	I1008 19:08:37.620089  584371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
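
Broken out with comments, the hosts-file one-liner above does the following (purely illustrative; the address and hostname are the values from the log):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts       # drop any stale entry for the name
      printf '192.168.61.141\tcontrol-plane.minikube.internal\n'     # append the current mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                                     # overwrite the real hosts file
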
	I1008 19:08:37.632181  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:37.748647  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:37.765577  584371 certs.go:68] Setting up /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632 for IP: 192.168.61.141
	I1008 19:08:37.765600  584371 certs.go:194] generating shared ca certs ...
	I1008 19:08:37.765619  584371 certs.go:226] acquiring lock for ca certs: {Name:mkbaac740ec3118836d5e2ebf2416bece08e7e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:37.765829  584371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key
	I1008 19:08:37.765890  584371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key
	I1008 19:08:37.765904  584371 certs.go:256] generating profile certs ...
	I1008 19:08:37.766020  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.key
	I1008 19:08:37.766095  584371 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key.a515ed11
	I1008 19:08:37.766143  584371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key
	I1008 19:08:37.766334  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem (1338 bytes)
	W1008 19:08:37.766383  584371 certs.go:480] ignoring /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013_empty.pem, impossibly tiny 0 bytes
	I1008 19:08:37.766398  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca-key.pem (1675 bytes)
	I1008 19:08:37.766430  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/ca.pem (1082 bytes)
	I1008 19:08:37.766467  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/cert.pem (1123 bytes)
	I1008 19:08:37.766501  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/certs/key.pem (1675 bytes)
	I1008 19:08:37.766562  584371 certs.go:484] found cert: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem (1708 bytes)
	I1008 19:08:37.767588  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 19:08:37.804400  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 19:08:37.837466  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 19:08:37.865516  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 19:08:37.894827  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 19:08:37.918668  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 19:08:37.948238  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 19:08:37.974152  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 19:08:37.997284  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/ssl/certs/5370132.pem --> /usr/share/ca-certificates/5370132.pem (1708 bytes)
	I1008 19:08:38.019295  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 19:08:38.043392  584371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19774-529764/.minikube/certs/537013.pem --> /usr/share/ca-certificates/537013.pem (1338 bytes)
	I1008 19:08:38.067971  584371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 19:08:38.084940  584371 ssh_runner.go:195] Run: openssl version
	I1008 19:08:38.090779  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5370132.pem && ln -fs /usr/share/ca-certificates/5370132.pem /etc/ssl/certs/5370132.pem"
	I1008 19:08:38.102715  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107292  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:53 /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.107355  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5370132.pem
	I1008 19:08:38.113456  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5370132.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 19:08:38.123904  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 19:08:38.134337  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138503  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:34 /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.138561  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 19:08:38.143902  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 19:08:38.155393  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/537013.pem && ln -fs /usr/share/ca-certificates/537013.pem /etc/ssl/certs/537013.pem"
	I1008 19:08:38.167107  584371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171433  584371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:53 /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.171480  584371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/537013.pem
	I1008 19:08:38.176968  584371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/537013.pem /etc/ssl/certs/51391683.0"
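
The /etc/ssl/certs/3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject-name hashes; a sketch of how one such link is derived and installed (file names come from the log, the two-step form is illustrative):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"              # name OpenSSL looks up
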
	I1008 19:08:38.188437  584371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 19:08:38.192733  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 19:08:38.198531  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 19:08:38.204187  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 19:08:38.210522  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 19:08:38.216328  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 19:08:38.222077  584371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
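
The six openssl runs above verify that none of the control-plane certificates expires within 86400 seconds (24 hours); the same checks in loop form (a sketch, with the file list taken from the log):

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
               etcd/healthcheck-client etcd/peer front-proxy-client; do
      # -checkend N exits non-zero if the certificate expires within the next N seconds
      openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "certificate ${crt} expires within 24h"
    done
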
	I1008 19:08:38.227724  584371 kubeadm.go:392] StartCluster: {Name:no-preload-966632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-966632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 19:08:38.227802  584371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 19:08:38.227882  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.262461  584371 cri.go:89] found id: ""
	I1008 19:08:38.262532  584371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 19:08:38.272591  584371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 19:08:38.272612  584371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 19:08:38.272677  584371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 19:08:38.282621  584371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 19:08:38.283683  584371 kubeconfig.go:125] found "no-preload-966632" server: "https://192.168.61.141:8443"
	I1008 19:08:38.286019  584371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 19:08:38.295315  584371 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.141
	I1008 19:08:38.295344  584371 kubeadm.go:1160] stopping kube-system containers ...
	I1008 19:08:38.295357  584371 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 19:08:38.295400  584371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 19:08:38.329462  584371 cri.go:89] found id: ""
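
The "stopping kube-system containers" step lists container IDs by pod-namespace label and stops whatever it finds; here the list is empty, so only the kubelet is stopped below. An illustrative equivalent (running crictl stop on the found IDs is an assumption about the non-empty case):

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && echo "$ids" | xargs -r sudo crictl stop   # nothing to do when the list is empty
    sudo systemctl stop kubelet
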
	I1008 19:08:38.329533  584371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 19:08:38.345901  584371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:08:38.354899  584371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:08:38.354920  584371 kubeadm.go:157] found existing configuration files:
	
	I1008 19:08:38.354965  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:08:38.363242  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:08:38.363282  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:08:38.373063  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:08:38.381479  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:08:38.381530  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:08:38.390679  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.400033  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:08:38.400071  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:08:38.409308  584371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:08:38.417842  584371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:08:38.417876  584371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
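
The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint and is removed otherwise, so that kubeadm can regenerate it. A compact sketch (file names and endpoint are from the log; the loop is an assumption):

    for conf in admin kubelet controller-manager scheduler; do
      f=/etc/kubernetes/${conf}.conf
      sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" 2>/dev/null \
        || sudo rm -f "$f"     # missing or pointing elsewhere: regenerate it below
    done
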
	I1008 19:08:38.427251  584371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:08:38.437010  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:38.562381  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.344247  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
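
Rather than a full kubeadm init, the restart path regenerates state phase by phase; a condensed sketch of the phases that appear here and a few lines further down (the helper function is illustrative, the phase names and config path are from the log):

    kadm() { sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm "$@"; }
    cfg=/var/tmp/minikube/kubeadm.yaml
    kadm init phase certs all --config "$cfg"           # (re)issue any missing certificates
    kadm init phase kubeconfig all --config "$cfg"      # admin/kubelet/controller-manager/scheduler kubeconfigs
    kadm init phase kubelet-start --config "$cfg"       # write the kubelet config and (re)start the kubelet
    kadm init phase control-plane all --config "$cfg"   # static pod manifests for apiserver, controller-manager, scheduler
    kadm init phase etcd local --config "$cfg"          # static pod manifest for the local etcd
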
	I1008 19:08:36.372911  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:38.872768  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:37.448043  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:39.956579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:36.706629  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.206776  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:37.706450  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.206782  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:38.706242  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.206263  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.705947  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.206632  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.705920  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:41.206747  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:39.550458  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.619345  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:39.718016  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:08:39.718126  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.218974  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.719108  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:40.741178  584371 api_server.go:72] duration metric: took 1.023163924s to wait for apiserver process to appear ...
	I1008 19:08:40.741210  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:08:40.741235  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:40.741767  584371 api_server.go:269] stopped: https://192.168.61.141:8443/healthz: Get "https://192.168.61.141:8443/healthz": dial tcp 192.168.61.141:8443: connect: connection refused
	I1008 19:08:41.241356  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.787235  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 19:08:43.787284  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 19:08:43.787306  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:43.914606  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:43.914653  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
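
A hand-run equivalent of the health probe above (illustrative only). An anonymous request is enough once the rbac/bootstrap-roles post-start hook has finished, which is why the very first attempt in the log returned 403; -k skips TLS verification:

    until curl -fsk https://192.168.61.141:8443/healthz >/dev/null; do
      sleep 0.5                                                # matches the ~500 ms retry in the log
    done
    curl -sk 'https://192.168.61.141:8443/healthz?verbose'     # prints the [+]/[-] check list shown above
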
	I1008 19:08:44.242033  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.247068  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.247097  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:40.873394  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:43.373475  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:42.446900  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:44.447141  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:41.706890  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.206437  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:42.706166  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.206028  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:43.706929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.206161  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.706784  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.206144  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:45.706004  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:46.206537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:44.742212  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:44.756340  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:44.756371  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.241997  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.246343  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.246367  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:45.741898  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:45.749274  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:45.749301  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.241889  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.246127  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.246155  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:46.741694  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:46.746192  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 19:08:46.746219  584371 api_server.go:103] status: https://192.168.61.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 19:08:47.242250  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:08:47.246571  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:08:47.252812  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:08:47.252843  584371 api_server.go:131] duration metric: took 6.511626175s to wait for apiserver health ...
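For context on the polling visible above (repeated 500s from /healthz until the apiservice-discovery-controller poststarthook completes, then a 200), here is a minimal sketch of that health-wait pattern. This is not minikube's actual api_server.go; the endpoint URL, poll interval, and the decision to skip TLS verification are illustrative assumptions.

// healthzwait: poll the apiserver /healthz endpoint until it returns 200 or a timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate in this setup, so the
			// sketch skips verification; a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// A 500 with per-check [+]/[-] lines means some poststarthook has not finished yet.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.141:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}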
	I1008 19:08:47.252852  584371 cni.go:84] Creating CNI manager for ""
	I1008 19:08:47.252858  584371 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:08:47.254723  584371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:08:47.255933  584371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:08:47.266073  584371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
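The two lines above create /etc/cni/net.d and copy a 496-byte bridge CNI conflist onto the node. As a rough illustration of what such a conflist contains, the sketch below builds a standard bridge-plus-portmap chain with encoding/json; the bridge name and pod subnet are assumptions, not the exact file minikube ships.

// cniconf: print an illustrative bridge CNI configuration similar in shape to 1-k8s.conflist.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge", // assumed bridge interface name
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	// A real run would write this to /etc/cni/net.d/1-k8s.conflist as root.
	fmt.Println(string(data))
}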
	I1008 19:08:47.284042  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:08:47.293401  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:08:47.293432  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 19:08:47.293439  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 19:08:47.293450  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 19:08:47.293456  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 19:08:47.293464  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:08:47.293469  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 19:08:47.293474  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:08:47.293478  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:08:47.293484  584371 system_pods.go:74] duration metric: took 9.422158ms to wait for pod list to return data ...
	I1008 19:08:47.293493  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:08:47.296923  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:08:47.296947  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:08:47.296960  584371 node_conditions.go:105] duration metric: took 3.462212ms to run NodePressure ...
	I1008 19:08:47.296979  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 19:08:47.562271  584371 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566914  584371 kubeadm.go:739] kubelet initialised
	I1008 19:08:47.566938  584371 kubeadm.go:740] duration metric: took 4.63692ms waiting for restarted kubelet to initialise ...
	I1008 19:08:47.566950  584371 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:47.571271  584371 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.575633  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575659  584371 pod_ready.go:82] duration metric: took 4.364181ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.575671  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.575680  584371 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.579443  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579465  584371 pod_ready.go:82] duration metric: took 3.775248ms for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.579475  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "etcd-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.579483  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.583747  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583775  584371 pod_ready.go:82] duration metric: took 4.277306ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.583785  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-apiserver-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.583797  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:47.687618  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687652  584371 pod_ready.go:82] duration metric: took 103.843425ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:47.687663  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:47.687669  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.087568  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087601  584371 pod_ready.go:82] duration metric: took 399.92202ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.087613  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-proxy-qpnvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.087622  584371 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.487223  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487256  584371 pod_ready.go:82] duration metric: took 399.625038ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.487269  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "kube-scheduler-no-preload-966632" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.487278  584371 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:48.887764  584371 pod_ready.go:98] node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887798  584371 pod_ready.go:82] duration metric: took 400.504473ms for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:08:48.887812  584371 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-966632" hosting pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:48.887821  584371 pod_ready.go:39] duration metric: took 1.320859293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
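The "extra waiting" phase above checks each system-critical pod and skips it while the hosting node is not Ready. A minimal client-go sketch of that per-pod Ready poll follows; it is not minikube's pod_ready.go, and the kubeconfig path and example pod name are taken from the log purely for illustration.

// podready: poll a pod until its PodReady condition is True or a timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19774-529764/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-r8qft", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}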
	I1008 19:08:48.887842  584371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:08:48.901255  584371 ops.go:34] apiserver oom_adj: -16
	I1008 19:08:48.901279  584371 kubeadm.go:597] duration metric: took 10.628659432s to restartPrimaryControlPlane
	I1008 19:08:48.901290  584371 kubeadm.go:394] duration metric: took 10.673572592s to StartCluster
	I1008 19:08:48.901313  584371 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.901397  584371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:08:48.904024  584371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:08:48.904361  584371 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:08:48.904455  584371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:08:48.904549  584371 addons.go:69] Setting storage-provisioner=true in profile "no-preload-966632"
	I1008 19:08:48.904565  584371 addons.go:69] Setting default-storageclass=true in profile "no-preload-966632"
	I1008 19:08:48.904594  584371 addons.go:234] Setting addon storage-provisioner=true in "no-preload-966632"
	W1008 19:08:48.904603  584371 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:08:48.904603  584371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-966632"
	I1008 19:08:48.904574  584371 addons.go:69] Setting metrics-server=true in profile "no-preload-966632"
	I1008 19:08:48.904646  584371 addons.go:234] Setting addon metrics-server=true in "no-preload-966632"
	I1008 19:08:48.904651  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.904652  584371 config.go:182] Loaded profile config "no-preload-966632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1008 19:08:48.904670  584371 addons.go:243] addon metrics-server should already be in state true
	I1008 19:08:48.904705  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.905079  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905116  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905133  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905151  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.905159  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.905205  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.906774  584371 out.go:177] * Verifying Kubernetes components...
	I1008 19:08:48.908138  584371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:08:48.942865  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I1008 19:08:48.943612  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.944201  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.944232  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.944667  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.944748  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1008 19:08:48.945485  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.945526  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.945763  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.946464  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.946484  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.946530  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I1008 19:08:48.946935  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.947052  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.947649  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.947693  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.948006  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.948027  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.948379  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.948602  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.951770  584371 addons.go:234] Setting addon default-storageclass=true in "no-preload-966632"
	W1008 19:08:48.951788  584371 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:08:48.951819  584371 host.go:66] Checking if "no-preload-966632" exists ...
	I1008 19:08:48.952055  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.952095  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.962422  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1008 19:08:48.962931  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.963509  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.963532  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.963908  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.964117  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.965879  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.967812  584371 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:08:48.967853  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1008 19:08:48.967817  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1008 19:08:48.968376  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968436  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.968885  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.968906  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.968964  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:08:48.968986  584371 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:08:48.969010  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.969290  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.969449  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.969472  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.969910  584371 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:08:48.969941  584371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:08:48.970187  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.970430  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.972100  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972523  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.972544  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.972677  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.972735  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.973016  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.973191  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.973323  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.974390  584371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:08:48.975651  584371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:48.975670  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:08:48.975686  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:48.978500  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.978855  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:48.978876  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:48.979079  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:48.979474  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:48.979640  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:48.979766  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:48.994846  584371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1008 19:08:48.995180  584371 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:08:48.995592  584371 main.go:141] libmachine: Using API Version  1
	I1008 19:08:48.995607  584371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:08:48.995976  584371 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:08:48.996173  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetState
	I1008 19:08:48.998270  584371 main.go:141] libmachine: (no-preload-966632) Calling .DriverName
	I1008 19:08:48.998549  584371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:48.998568  584371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:08:48.998591  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHHostname
	I1008 19:08:49.000647  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.000908  584371 main.go:141] libmachine: (no-preload-966632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:3f:c2", ip: ""} in network mk-no-preload-966632: {Iface:virbr4 ExpiryTime:2024-10-08 20:08:13 +0000 UTC Type:0 Mac:52:54:00:6a:3f:c2 Iaid: IPaddr:192.168.61.141 Prefix:24 Hostname:no-preload-966632 Clientid:01:52:54:00:6a:3f:c2}
	I1008 19:08:49.000924  584371 main.go:141] libmachine: (no-preload-966632) DBG | domain no-preload-966632 has defined IP address 192.168.61.141 and MAC address 52:54:00:6a:3f:c2 in network mk-no-preload-966632
	I1008 19:08:49.001078  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHPort
	I1008 19:08:49.001185  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHKeyPath
	I1008 19:08:49.001282  584371 main.go:141] libmachine: (no-preload-966632) Calling .GetSSHUsername
	I1008 19:08:49.001358  584371 sshutil.go:53] new ssh client: &{IP:192.168.61.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/no-preload-966632/id_rsa Username:docker}
	I1008 19:08:49.118217  584371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:08:49.138077  584371 node_ready.go:35] waiting up to 6m0s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:49.217300  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:08:49.241237  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:08:49.365395  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:08:49.365420  584371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:08:45.873500  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.373215  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:49.403596  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:08:49.403625  584371 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:08:49.438480  584371 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:49.438540  584371 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:08:49.464366  584371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:08:50.474783  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.233506833s)
	I1008 19:08:50.474850  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474862  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.474914  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.257567473s)
	I1008 19:08:50.474955  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.474964  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475191  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475206  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475215  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475221  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475280  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475289  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475297  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.475303  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.475310  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475441  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475454  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.475582  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.475596  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.475628  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482003  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.482031  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.482315  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.482351  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.482372  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.512902  584371 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.048483922s)
	I1008 19:08:50.512957  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.512980  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513241  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513257  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513261  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513299  584371 main.go:141] libmachine: Making call to close driver server
	I1008 19:08:50.513307  584371 main.go:141] libmachine: (no-preload-966632) Calling .Close
	I1008 19:08:50.513534  584371 main.go:141] libmachine: (no-preload-966632) DBG | Closing plugin on server side
	I1008 19:08:50.513552  584371 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:08:50.513561  584371 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:08:50.513577  584371 addons.go:475] Verifying addon metrics-server=true in "no-preload-966632"
	I1008 19:08:50.515302  584371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:08:46.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:48.448332  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:50.449239  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:46.706613  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.206660  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:47.705860  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.206331  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:48.706529  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.205870  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:49.705875  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.206468  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.706089  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:51.206644  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:50.516457  584371 addons.go:510] duration metric: took 1.612011936s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
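The addon enablement logged above copies each manifest to /etc/kubernetes/addons and applies it with the node-local kubectl and kubeconfig. A minimal sketch of that apply step is below; it is not minikube's addons.go or ssh_runner.go, and it runs the command locally with os/exec rather than over SSH inside the VM, which is an assumption for simplicity.

// addonapply: run the same "sudo KUBECONFIG=... kubectl apply -f ..." command the log shows.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo accepts VAR=value assignments before the command, matching the logged invocation.
	cmd := exec.Command("sudo", args...)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	return err
}

func main() {
	err := applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}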
	I1008 19:08:51.141437  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:53.142166  584371 node_ready.go:53] node "no-preload-966632" has status "Ready":"False"
	I1008 19:08:54.141208  584371 node_ready.go:49] node "no-preload-966632" has status "Ready":"True"
	I1008 19:08:54.141238  584371 node_ready.go:38] duration metric: took 5.003121669s for node "no-preload-966632" to be "Ready" ...
	I1008 19:08:54.141251  584371 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:08:54.146685  584371 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151059  584371 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:54.151078  584371 pod_ready.go:82] duration metric: took 4.369406ms for pod "coredns-7c65d6cfc9-r8qft" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:54.151086  584371 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:50.872416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:53.372230  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:52.947461  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:54.950183  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:51.706603  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.205859  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:52.706989  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.206430  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:53.706793  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.206575  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:54.706833  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.206506  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:55.706025  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.206755  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:56.157153  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.157458  584371 pod_ready.go:103] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.658595  584371 pod_ready.go:93] pod "etcd-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.658617  584371 pod_ready.go:82] duration metric: took 4.507524391s for pod "etcd-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.658627  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663785  584371 pod_ready.go:93] pod "kube-apiserver-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.663811  584371 pod_ready.go:82] duration metric: took 5.176586ms for pod "kube-apiserver-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.663823  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668310  584371 pod_ready.go:93] pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.668342  584371 pod_ready.go:82] duration metric: took 4.509914ms for pod "kube-controller-manager-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.668356  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672380  584371 pod_ready.go:93] pod "kube-proxy-qpnvm" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.672397  584371 pod_ready.go:82] duration metric: took 4.034104ms for pod "kube-proxy-qpnvm" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.672405  584371 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676499  584371 pod_ready.go:93] pod "kube-scheduler-no-preload-966632" in "kube-system" namespace has status "Ready":"True"
	I1008 19:08:58.676517  584371 pod_ready.go:82] duration metric: took 4.106343ms for pod "kube-scheduler-no-preload-966632" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:58.676527  584371 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	I1008 19:08:55.873069  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:58.372424  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:57.448182  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:59.947932  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:08:56.706662  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.205960  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:57.706537  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.206300  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:58.705981  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.206079  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:08:59.705964  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.206810  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.706140  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:01.205997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:00.682583  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.682958  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:00.872650  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.872783  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:05.371825  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:02.447340  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:04.447504  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:01.706311  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.206527  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:02.706259  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.206609  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:03.706462  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.206423  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.706765  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.206671  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:05.706721  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:06.206350  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:04.683354  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.183362  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.183636  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:07.872083  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:09.874058  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.947502  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:08.948054  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:06.706880  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.206562  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:07.705997  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.206071  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:08.706438  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.206857  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:09.706670  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.206766  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:10.706174  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.206117  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:11.683833  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.188267  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:12.371967  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:14.372404  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.448009  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:13.948106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:15.948926  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:11.706366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:11.706474  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:11.743165  585386 cri.go:89] found id: ""
	I1008 19:09:11.743195  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.743206  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:11.743212  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:11.743263  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:11.776037  585386 cri.go:89] found id: ""
	I1008 19:09:11.776068  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.776077  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:11.776083  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:11.776132  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:11.809363  585386 cri.go:89] found id: ""
	I1008 19:09:11.809397  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.809410  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:11.809418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:11.809485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:11.841504  585386 cri.go:89] found id: ""
	I1008 19:09:11.841540  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.841552  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:11.841560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:11.841623  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:11.875440  585386 cri.go:89] found id: ""
	I1008 19:09:11.875470  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.875482  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:11.875489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:11.875550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:11.915765  585386 cri.go:89] found id: ""
	I1008 19:09:11.915797  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.915809  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:11.915817  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:11.915905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:11.948106  585386 cri.go:89] found id: ""
	I1008 19:09:11.948135  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.948145  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:11.948158  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:11.948221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:11.984387  585386 cri.go:89] found id: ""
	I1008 19:09:11.984420  585386 logs.go:282] 0 containers: []
	W1008 19:09:11.984431  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:11.984443  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:11.984473  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:12.106478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:12.106509  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:12.106527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:12.178067  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:12.178103  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:12.216402  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:12.216433  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:12.267186  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:12.267220  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
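When no control-plane containers are found, the log above falls back to gathering diagnostics (kubelet and CRI-O journals, dmesg, container status). A minimal sketch of that collection loop follows; the shell commands are taken verbatim from the log, but running them locally instead of over SSH on the node is an assumption, and this is not minikube's logs.go.

// gatherlogs: run a fixed set of diagnostic shell commands and collect their output.
package main

import (
	"fmt"
	"os/exec"
)

func gatherLogs() map[string]string {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	out := make(map[string]string)
	for name, c := range cmds {
		b, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v\n%s", err, b)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, text := range gatherLogs() {
		fmt.Printf("==> %s <==\n%s\n", name, text)
	}
}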
	I1008 19:09:14.781503  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:14.794808  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:14.794872  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:14.827501  585386 cri.go:89] found id: ""
	I1008 19:09:14.827534  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.827544  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:14.827550  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:14.827615  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:14.862634  585386 cri.go:89] found id: ""
	I1008 19:09:14.862667  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.862680  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:14.862697  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:14.862773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:14.901444  585386 cri.go:89] found id: ""
	I1008 19:09:14.901471  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.901480  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:14.901485  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:14.901537  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:14.937807  585386 cri.go:89] found id: ""
	I1008 19:09:14.937841  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.937854  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:14.937862  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:14.937932  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:14.974538  585386 cri.go:89] found id: ""
	I1008 19:09:14.974566  585386 logs.go:282] 0 containers: []
	W1008 19:09:14.974579  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:14.974587  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:14.974649  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:15.016426  585386 cri.go:89] found id: ""
	I1008 19:09:15.016462  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.016474  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:15.016487  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:15.016548  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:15.054834  585386 cri.go:89] found id: ""
	I1008 19:09:15.054865  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.054874  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:15.054881  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:15.054934  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:15.100425  585386 cri.go:89] found id: ""
	I1008 19:09:15.100455  585386 logs.go:282] 0 containers: []
	W1008 19:09:15.100464  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:15.100473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:15.100485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:15.152394  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:15.152431  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:15.167732  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:15.167767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:15.244649  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:15.244674  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:15.244688  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:15.328373  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:15.328424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:16.683453  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.184073  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:16.873511  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:19.372353  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:18.446864  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:20.449087  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:17.881929  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:17.895273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:17.895332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:17.931485  585386 cri.go:89] found id: ""
	I1008 19:09:17.931512  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.931521  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:17.931527  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:17.931587  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:17.966615  585386 cri.go:89] found id: ""
	I1008 19:09:17.966645  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.966656  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:17.966664  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:17.966727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:17.999728  585386 cri.go:89] found id: ""
	I1008 19:09:17.999758  585386 logs.go:282] 0 containers: []
	W1008 19:09:17.999768  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:17.999778  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:17.999850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:18.035508  585386 cri.go:89] found id: ""
	I1008 19:09:18.035540  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.035553  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:18.035561  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:18.035624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:18.071001  585386 cri.go:89] found id: ""
	I1008 19:09:18.071034  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.071044  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:18.071050  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:18.071103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:18.104399  585386 cri.go:89] found id: ""
	I1008 19:09:18.104428  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.104437  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:18.104444  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:18.104496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:18.140410  585386 cri.go:89] found id: ""
	I1008 19:09:18.140443  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.140456  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:18.140465  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:18.140528  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:18.178573  585386 cri.go:89] found id: ""
	I1008 19:09:18.178608  585386 logs.go:282] 0 containers: []
	W1008 19:09:18.178619  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:18.178630  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:18.178646  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:18.229137  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:18.229171  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:18.242828  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:18.242864  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:18.311332  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:18.311352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:18.311363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:18.390287  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:18.390323  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:20.928195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:20.941409  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:20.941468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:20.978156  585386 cri.go:89] found id: ""
	I1008 19:09:20.978186  585386 logs.go:282] 0 containers: []
	W1008 19:09:20.978197  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:20.978205  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:20.978269  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:21.011375  585386 cri.go:89] found id: ""
	I1008 19:09:21.011404  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.011416  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:21.011424  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:21.011487  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:21.048409  585386 cri.go:89] found id: ""
	I1008 19:09:21.048437  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.048446  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:21.048452  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:21.048563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:21.090491  585386 cri.go:89] found id: ""
	I1008 19:09:21.090527  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.090559  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:21.090568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:21.090639  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:21.133553  585386 cri.go:89] found id: ""
	I1008 19:09:21.133581  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.133590  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:21.133596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:21.133651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:21.172814  585386 cri.go:89] found id: ""
	I1008 19:09:21.172848  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.172861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:21.172869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:21.172938  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:21.221452  585386 cri.go:89] found id: ""
	I1008 19:09:21.221480  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.221489  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:21.221496  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:21.221559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:21.255350  585386 cri.go:89] found id: ""
	I1008 19:09:21.255380  585386 logs.go:282] 0 containers: []
	W1008 19:09:21.255390  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:21.255399  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:21.255413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:21.306621  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:21.306661  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:21.320562  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:21.320602  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:21.397043  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:21.397072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:21.397087  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:21.481548  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:21.481581  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:21.184209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.683535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:21.373869  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:23.872606  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:22.947224  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.947961  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:24.022521  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:24.035695  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:24.035758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:24.068625  585386 cri.go:89] found id: ""
	I1008 19:09:24.068649  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.068660  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:24.068667  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:24.068734  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:24.101753  585386 cri.go:89] found id: ""
	I1008 19:09:24.101796  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.101809  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:24.101818  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:24.101881  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:24.132682  585386 cri.go:89] found id: ""
	I1008 19:09:24.132714  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.132723  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:24.132730  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:24.132794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:24.168438  585386 cri.go:89] found id: ""
	I1008 19:09:24.168471  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.168480  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:24.168486  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:24.168562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:24.205491  585386 cri.go:89] found id: ""
	I1008 19:09:24.205523  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.205543  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:24.205549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:24.205624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:24.239355  585386 cri.go:89] found id: ""
	I1008 19:09:24.239388  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.239402  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:24.239410  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:24.239468  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:24.270598  585386 cri.go:89] found id: ""
	I1008 19:09:24.270629  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.270638  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:24.270644  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:24.270694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:24.303808  585386 cri.go:89] found id: ""
	I1008 19:09:24.303842  585386 logs.go:282] 0 containers: []
	W1008 19:09:24.303852  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:24.303862  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:24.303874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:24.340961  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:24.340999  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:24.392311  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:24.392347  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:24.405895  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:24.405924  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:24.476099  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:24.476127  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:24.476145  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:26.183587  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.184349  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:26.373049  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:28.873435  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.447254  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:29.447470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:27.057772  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:27.073331  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:27.073425  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:27.112158  585386 cri.go:89] found id: ""
	I1008 19:09:27.112192  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.112204  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:27.112213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:27.112279  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:27.155096  585386 cri.go:89] found id: ""
	I1008 19:09:27.155133  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.155147  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:27.155154  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:27.155218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:27.212958  585386 cri.go:89] found id: ""
	I1008 19:09:27.212992  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.213003  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:27.213010  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:27.213066  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:27.246859  585386 cri.go:89] found id: ""
	I1008 19:09:27.246886  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.246896  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:27.246902  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:27.246964  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:27.281199  585386 cri.go:89] found id: ""
	I1008 19:09:27.281235  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.281248  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:27.281256  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:27.281332  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:27.315205  585386 cri.go:89] found id: ""
	I1008 19:09:27.315239  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.315249  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:27.315255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:27.315320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:27.347590  585386 cri.go:89] found id: ""
	I1008 19:09:27.347627  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.347640  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:27.347648  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:27.347708  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:27.384515  585386 cri.go:89] found id: ""
	I1008 19:09:27.384544  585386 logs.go:282] 0 containers: []
	W1008 19:09:27.384555  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:27.384566  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:27.384582  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:27.439547  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:27.439595  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:27.453383  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:27.453406  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:27.521874  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:27.521902  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:27.521916  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:27.600423  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:27.600469  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.144906  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:30.158290  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:30.158388  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:30.192938  585386 cri.go:89] found id: ""
	I1008 19:09:30.192994  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.193007  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:30.193015  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:30.193083  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:30.226999  585386 cri.go:89] found id: ""
	I1008 19:09:30.227036  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.227049  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:30.227057  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:30.227129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:30.262985  585386 cri.go:89] found id: ""
	I1008 19:09:30.263017  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.263028  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:30.263036  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:30.263098  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:30.294528  585386 cri.go:89] found id: ""
	I1008 19:09:30.294571  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.294584  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:30.294591  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:30.294654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:30.328909  585386 cri.go:89] found id: ""
	I1008 19:09:30.328941  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.328952  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:30.328961  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:30.329029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:30.370816  585386 cri.go:89] found id: ""
	I1008 19:09:30.370851  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.370861  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:30.370869  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:30.370935  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:30.403589  585386 cri.go:89] found id: ""
	I1008 19:09:30.403623  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.403635  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:30.403643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:30.403707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:30.434695  585386 cri.go:89] found id: ""
	I1008 19:09:30.434729  585386 logs.go:282] 0 containers: []
	W1008 19:09:30.434742  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:30.434753  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:30.434767  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:30.473767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:30.473799  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:30.525738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:30.525771  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:30.538863  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:30.538891  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:30.610106  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:30.610132  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:30.610149  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:30.683953  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.183412  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.371635  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.373244  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:31.448173  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.458099  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.947741  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:33.195038  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:33.207643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:33.207704  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:33.239651  585386 cri.go:89] found id: ""
	I1008 19:09:33.239681  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.239691  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:33.239698  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:33.239759  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:33.270699  585386 cri.go:89] found id: ""
	I1008 19:09:33.270728  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.270737  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:33.270743  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:33.270803  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:33.302314  585386 cri.go:89] found id: ""
	I1008 19:09:33.302355  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.302365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:33.302371  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:33.302421  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:33.339005  585386 cri.go:89] found id: ""
	I1008 19:09:33.339034  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.339043  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:33.339049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:33.339102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:33.372924  585386 cri.go:89] found id: ""
	I1008 19:09:33.372954  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.372965  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:33.372973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:33.373031  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:33.406228  585386 cri.go:89] found id: ""
	I1008 19:09:33.406300  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.406313  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:33.406336  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:33.406403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:33.440548  585386 cri.go:89] found id: ""
	I1008 19:09:33.440582  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.440596  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:33.440604  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:33.440675  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:33.478529  585386 cri.go:89] found id: ""
	I1008 19:09:33.478558  585386 logs.go:282] 0 containers: []
	W1008 19:09:33.478567  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:33.478576  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:33.478597  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:33.529995  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:33.530029  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:33.544030  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:33.544056  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:33.611370  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:33.611403  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:33.611424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:33.694847  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:33.694880  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.236034  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:36.248995  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:36.249062  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:36.281690  585386 cri.go:89] found id: ""
	I1008 19:09:36.281727  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.281744  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:36.281753  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:36.281819  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:36.314937  585386 cri.go:89] found id: ""
	I1008 19:09:36.314971  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.314983  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:36.314991  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:36.315060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:36.347457  585386 cri.go:89] found id: ""
	I1008 19:09:36.347486  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.347497  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:36.347505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:36.347562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:36.384246  585386 cri.go:89] found id: ""
	I1008 19:09:36.384268  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.384278  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:36.384286  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:36.384350  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:36.419593  585386 cri.go:89] found id: ""
	I1008 19:09:36.419621  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.419630  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:36.419637  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:36.419698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:36.466251  585386 cri.go:89] found id: ""
	I1008 19:09:36.466279  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.466288  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:36.466294  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:36.466369  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:36.505568  585386 cri.go:89] found id: ""
	I1008 19:09:36.505591  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.505602  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:36.505610  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:36.505674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:36.543071  585386 cri.go:89] found id: ""
	I1008 19:09:36.543097  585386 logs.go:282] 0 containers: []
	W1008 19:09:36.543107  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:36.543116  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:36.543128  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:36.617974  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:36.618002  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:36.618020  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:35.184447  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.682974  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:35.872226  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:37.872308  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:39.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:38.447494  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:40.947078  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:36.702739  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:36.702772  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:36.741182  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:36.741222  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:36.795319  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:36.795360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.309946  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:39.323263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:39.323340  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:39.358245  585386 cri.go:89] found id: ""
	I1008 19:09:39.358277  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.358286  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:39.358293  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:39.358362  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:39.395224  585386 cri.go:89] found id: ""
	I1008 19:09:39.395255  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.395266  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:39.395274  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:39.395337  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:39.431000  585386 cri.go:89] found id: ""
	I1008 19:09:39.431028  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.431037  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:39.431043  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:39.431110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:39.463534  585386 cri.go:89] found id: ""
	I1008 19:09:39.463558  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.463566  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:39.463571  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:39.463622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:39.499849  585386 cri.go:89] found id: ""
	I1008 19:09:39.499882  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.499894  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:39.499903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:39.499973  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:39.533652  585386 cri.go:89] found id: ""
	I1008 19:09:39.533685  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.533696  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:39.533705  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:39.533760  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:39.567848  585386 cri.go:89] found id: ""
	I1008 19:09:39.567885  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.567927  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:39.567940  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:39.568019  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:39.600964  585386 cri.go:89] found id: ""
	I1008 19:09:39.600990  585386 logs.go:282] 0 containers: []
	W1008 19:09:39.600999  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:39.601008  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:39.601022  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:39.653102  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:39.653150  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:39.667640  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:39.667684  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:39.745368  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:39.745399  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:39.745416  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:39.824803  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:39.824844  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:39.686907  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.183930  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.184443  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.372207  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:44.872360  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.947712  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:45.447011  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:42.369048  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:42.384072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:42.384130  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:42.422717  585386 cri.go:89] found id: ""
	I1008 19:09:42.422744  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.422753  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:42.422759  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:42.422824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:42.458423  585386 cri.go:89] found id: ""
	I1008 19:09:42.458451  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.458460  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:42.458465  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:42.458522  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:42.490295  585386 cri.go:89] found id: ""
	I1008 19:09:42.490338  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.490351  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:42.490359  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:42.490419  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:42.526557  585386 cri.go:89] found id: ""
	I1008 19:09:42.526595  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.526607  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:42.526616  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:42.526688  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:42.565426  585386 cri.go:89] found id: ""
	I1008 19:09:42.565459  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.565477  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:42.565483  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:42.565562  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:42.598947  585386 cri.go:89] found id: ""
	I1008 19:09:42.598983  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.598995  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:42.599001  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:42.599072  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:42.631890  585386 cri.go:89] found id: ""
	I1008 19:09:42.631923  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.631934  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:42.631946  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:42.632010  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:42.669290  585386 cri.go:89] found id: ""
	I1008 19:09:42.669323  585386 logs.go:282] 0 containers: []
	W1008 19:09:42.669336  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:42.669348  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:42.669365  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:42.722942  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:42.722980  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:42.736848  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:42.736873  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:42.810314  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:42.810352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:42.810366  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:42.888350  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:42.888384  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.428190  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:45.442488  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:45.442555  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:45.475141  585386 cri.go:89] found id: ""
	I1008 19:09:45.475165  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.475173  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:45.475179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:45.475243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:45.507838  585386 cri.go:89] found id: ""
	I1008 19:09:45.507865  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.507876  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:45.507883  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:45.507944  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:45.541549  585386 cri.go:89] found id: ""
	I1008 19:09:45.541608  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.541621  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:45.541628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:45.541684  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:45.575361  585386 cri.go:89] found id: ""
	I1008 19:09:45.575394  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.575406  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:45.575414  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:45.575484  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:45.607892  585386 cri.go:89] found id: ""
	I1008 19:09:45.607924  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.607936  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:45.607944  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:45.608009  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:45.640636  585386 cri.go:89] found id: ""
	I1008 19:09:45.640663  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.640683  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:45.640692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:45.640747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:45.672483  585386 cri.go:89] found id: ""
	I1008 19:09:45.672515  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.672526  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:45.672535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:45.672607  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:45.706812  585386 cri.go:89] found id: ""
	I1008 19:09:45.706845  585386 logs.go:282] 0 containers: []
	W1008 19:09:45.706857  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:45.706870  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:45.706892  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:45.742425  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:45.742460  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:45.800517  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:45.800556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:45.814982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:45.815015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:45.886634  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:45.886659  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:45.886675  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:46.682572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.683539  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.372618  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.373137  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:47.448127  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:49.947787  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:48.472451  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:48.485427  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:48.485509  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:48.525126  585386 cri.go:89] found id: ""
	I1008 19:09:48.525153  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.525161  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:48.525168  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:48.525228  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:48.559189  585386 cri.go:89] found id: ""
	I1008 19:09:48.559236  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.559249  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:48.559257  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:48.559322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:48.597909  585386 cri.go:89] found id: ""
	I1008 19:09:48.597946  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.597959  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:48.597966  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:48.598029  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:48.631077  585386 cri.go:89] found id: ""
	I1008 19:09:48.631117  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.631130  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:48.631138  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:48.631205  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:48.664493  585386 cri.go:89] found id: ""
	I1008 19:09:48.664526  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.664541  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:48.664549  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:48.664610  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:48.700638  585386 cri.go:89] found id: ""
	I1008 19:09:48.700668  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.700680  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:48.700688  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:48.700747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:48.736765  585386 cri.go:89] found id: ""
	I1008 19:09:48.736790  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.736800  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:48.736807  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:48.736862  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:48.771413  585386 cri.go:89] found id: ""
	I1008 19:09:48.771449  585386 logs.go:282] 0 containers: []
	W1008 19:09:48.771461  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:48.771473  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:48.771491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:48.824938  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:48.824976  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:48.838490  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:48.838524  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:48.907401  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:48.907430  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:48.907448  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:48.984521  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:48.984556  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.526460  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:51.541033  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:51.541094  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:51.579570  585386 cri.go:89] found id: ""
	I1008 19:09:51.579605  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.579619  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:51.579635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:51.579694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:51.613000  585386 cri.go:89] found id: ""
	I1008 19:09:51.613034  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.613047  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:51.613055  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:51.613120  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:51.646059  585386 cri.go:89] found id: ""
	I1008 19:09:51.646102  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.646123  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:51.646131  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:51.646203  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:50.683784  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:53.183034  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.873417  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.373414  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.948470  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:54.447675  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:51.677648  585386 cri.go:89] found id: ""
	I1008 19:09:51.677672  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.677680  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:51.677687  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:51.677748  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:51.711784  585386 cri.go:89] found id: ""
	I1008 19:09:51.711812  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.711821  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:51.711827  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:51.711877  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:51.745938  585386 cri.go:89] found id: ""
	I1008 19:09:51.745969  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.745979  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:51.745986  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:51.746048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:51.779358  585386 cri.go:89] found id: ""
	I1008 19:09:51.779398  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.779409  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:51.779417  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:51.779483  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:51.816098  585386 cri.go:89] found id: ""
	I1008 19:09:51.816134  585386 logs.go:282] 0 containers: []
	W1008 19:09:51.816147  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:51.816159  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:51.816184  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:51.856716  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:51.856749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:51.910203  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:51.910244  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:51.924455  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:51.924483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:51.994930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:51.994954  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:51.994970  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:54.573987  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:54.587263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:54.587338  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:54.621127  585386 cri.go:89] found id: ""
	I1008 19:09:54.621159  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.621171  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:54.621179  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:54.621231  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:54.660133  585386 cri.go:89] found id: ""
	I1008 19:09:54.660165  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.660178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:54.660185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:54.660241  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:54.693054  585386 cri.go:89] found id: ""
	I1008 19:09:54.693086  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.693097  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:54.693106  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:54.693172  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:54.730554  585386 cri.go:89] found id: ""
	I1008 19:09:54.730583  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.730593  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:54.730600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:54.730666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:54.764919  585386 cri.go:89] found id: ""
	I1008 19:09:54.764951  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.764963  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:54.764972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:54.765047  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:54.797828  585386 cri.go:89] found id: ""
	I1008 19:09:54.797859  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.797869  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:54.797875  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:54.797941  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:54.831276  585386 cri.go:89] found id: ""
	I1008 19:09:54.831305  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.831316  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:54.831323  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:54.831393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:54.870914  585386 cri.go:89] found id: ""
	I1008 19:09:54.870945  585386 logs.go:282] 0 containers: []
	W1008 19:09:54.870956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:54.870967  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:54.870983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:54.941556  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:54.941588  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:54.941605  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:55.022736  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:55.022775  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:09:55.062530  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:55.062565  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:55.111948  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:55.111982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:55.184058  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.683581  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.872213  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.872323  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:56.447790  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:58.947901  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.948561  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:09:57.625743  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:09:57.640454  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:09:57.640544  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:09:57.679564  585386 cri.go:89] found id: ""
	I1008 19:09:57.679590  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.679601  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:09:57.679609  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:09:57.679673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:09:57.713629  585386 cri.go:89] found id: ""
	I1008 19:09:57.713663  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.713673  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:09:57.713679  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:09:57.713739  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:09:57.749502  585386 cri.go:89] found id: ""
	I1008 19:09:57.749534  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.749546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:09:57.749555  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:09:57.749634  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:09:57.791679  585386 cri.go:89] found id: ""
	I1008 19:09:57.791706  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.791717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:09:57.791726  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:09:57.791794  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:09:57.826406  585386 cri.go:89] found id: ""
	I1008 19:09:57.826437  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.826447  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:09:57.826453  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:09:57.826511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:09:57.859189  585386 cri.go:89] found id: ""
	I1008 19:09:57.859221  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.859232  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:09:57.859241  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:09:57.859306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:09:57.892733  585386 cri.go:89] found id: ""
	I1008 19:09:57.892765  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.892774  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:09:57.892782  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:09:57.892847  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:09:57.925119  585386 cri.go:89] found id: ""
	I1008 19:09:57.925151  585386 logs.go:282] 0 containers: []
	W1008 19:09:57.925161  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:09:57.925170  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:09:57.925186  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:09:57.979814  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:09:57.979848  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:09:57.994544  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:09:57.994574  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:09:58.064397  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:09:58.064424  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:09:58.064439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:09:58.140104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:09:58.140141  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:00.686429  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:00.700481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:00.700556  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:00.734609  585386 cri.go:89] found id: ""
	I1008 19:10:00.734640  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.734648  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:00.734654  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:00.734707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:00.767173  585386 cri.go:89] found id: ""
	I1008 19:10:00.767198  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.767207  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:00.767215  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:00.767277  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:00.805416  585386 cri.go:89] found id: ""
	I1008 19:10:00.805449  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.805462  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:00.805481  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:00.805550  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:00.838673  585386 cri.go:89] found id: ""
	I1008 19:10:00.838698  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.838707  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:00.838714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:00.838776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:00.877241  585386 cri.go:89] found id: ""
	I1008 19:10:00.877261  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.877269  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:00.877274  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:00.877334  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:00.910692  585386 cri.go:89] found id: ""
	I1008 19:10:00.910726  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.910738  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:00.910747  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:00.910809  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:00.947312  585386 cri.go:89] found id: ""
	I1008 19:10:00.947346  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.947359  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:00.947366  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:00.947439  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:00.978434  585386 cri.go:89] found id: ""
	I1008 19:10:00.978458  585386 logs.go:282] 0 containers: []
	W1008 19:10:00.978466  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:00.978475  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:00.978488  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:01.017764  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:01.017797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:01.068597  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:01.068632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:01.083060  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:01.083090  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:01.152452  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:01.152480  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:01.152501  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:00.182341  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.183137  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:04.186590  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:00.872469  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:02.872708  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.372543  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.447536  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:05.947676  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:03.754642  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:03.769783  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:03.769844  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:03.809299  585386 cri.go:89] found id: ""
	I1008 19:10:03.809327  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.809338  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:03.809346  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:03.809414  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:03.842863  585386 cri.go:89] found id: ""
	I1008 19:10:03.842898  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.842911  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:03.842919  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:03.842985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:03.878251  585386 cri.go:89] found id: ""
	I1008 19:10:03.878287  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.878298  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:03.878306  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:03.878390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:03.916238  585386 cri.go:89] found id: ""
	I1008 19:10:03.916266  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.916274  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:03.916280  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:03.916339  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:03.949266  585386 cri.go:89] found id: ""
	I1008 19:10:03.949293  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.949302  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:03.949308  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:03.949366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:03.984568  585386 cri.go:89] found id: ""
	I1008 19:10:03.984605  585386 logs.go:282] 0 containers: []
	W1008 19:10:03.984614  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:03.984621  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:03.984682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:04.027098  585386 cri.go:89] found id: ""
	I1008 19:10:04.027140  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.027153  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:04.027161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:04.027230  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:04.061286  585386 cri.go:89] found id: ""
	I1008 19:10:04.061324  585386 logs.go:282] 0 containers: []
	W1008 19:10:04.061337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:04.061349  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:04.061364  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:04.113420  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:04.113459  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:04.127783  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:04.127811  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:04.200667  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:04.200688  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:04.200700  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:04.278296  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:04.278355  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:06.683572  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.183605  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.373804  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.872253  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:07.947764  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:09.948705  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:06.816994  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:06.831184  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:06.831251  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:06.873966  585386 cri.go:89] found id: ""
	I1008 19:10:06.873994  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.874002  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:06.874008  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:06.874071  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:06.928740  585386 cri.go:89] found id: ""
	I1008 19:10:06.928776  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.928788  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:06.928796  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:06.928860  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:06.975567  585386 cri.go:89] found id: ""
	I1008 19:10:06.975600  585386 logs.go:282] 0 containers: []
	W1008 19:10:06.975618  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:06.975628  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:06.975694  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:07.018146  585386 cri.go:89] found id: ""
	I1008 19:10:07.018178  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.018188  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:07.018195  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:07.018260  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:07.052772  585386 cri.go:89] found id: ""
	I1008 19:10:07.052803  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.052815  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:07.052822  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:07.052889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:07.088171  585386 cri.go:89] found id: ""
	I1008 19:10:07.088203  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.088215  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:07.088223  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:07.088290  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:07.121562  585386 cri.go:89] found id: ""
	I1008 19:10:07.121595  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.121605  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:07.121612  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:07.121666  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:07.155670  585386 cri.go:89] found id: ""
	I1008 19:10:07.155701  585386 logs.go:282] 0 containers: []
	W1008 19:10:07.155711  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:07.155722  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:07.155736  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:07.232751  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:07.232797  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:07.272230  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:07.272270  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:07.325686  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:07.325726  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:07.340287  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:07.340317  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:07.420333  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:09.921520  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:09.937870  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:09.937946  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:09.976114  585386 cri.go:89] found id: ""
	I1008 19:10:09.976141  585386 logs.go:282] 0 containers: []
	W1008 19:10:09.976150  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:09.976157  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:09.976211  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:10.010472  585386 cri.go:89] found id: ""
	I1008 19:10:10.010527  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.010540  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:10.010558  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:10.010626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:10.045114  585386 cri.go:89] found id: ""
	I1008 19:10:10.045151  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.045165  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:10.045173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:10.045245  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:10.081038  585386 cri.go:89] found id: ""
	I1008 19:10:10.081078  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.081091  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:10.081100  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:10.081166  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:10.116211  585386 cri.go:89] found id: ""
	I1008 19:10:10.116247  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.116257  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:10.116263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:10.116320  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:10.152046  585386 cri.go:89] found id: ""
	I1008 19:10:10.152083  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.152099  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:10.152108  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:10.152167  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:10.190661  585386 cri.go:89] found id: ""
	I1008 19:10:10.190692  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.190704  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:10.190712  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:10.190773  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:10.227025  585386 cri.go:89] found id: ""
	I1008 19:10:10.227060  585386 logs.go:282] 0 containers: []
	W1008 19:10:10.227082  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:10.227100  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:10.227123  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:10.266241  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:10.266281  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:10.316593  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:10.316639  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:10.330804  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:10.330843  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:10.409481  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:10.409512  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:10.409531  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:11.184118  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:13.184173  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.372084  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.373845  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.447832  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:14.948882  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:12.987533  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:13.002214  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:13.002299  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:13.044150  585386 cri.go:89] found id: ""
	I1008 19:10:13.044184  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.044195  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:13.044201  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:13.044252  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:13.078539  585386 cri.go:89] found id: ""
	I1008 19:10:13.078579  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.078591  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:13.078599  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:13.078676  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:13.111611  585386 cri.go:89] found id: ""
	I1008 19:10:13.111649  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.111663  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:13.111671  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:13.111742  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:13.145212  585386 cri.go:89] found id: ""
	I1008 19:10:13.145244  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.145253  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:13.145259  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:13.145322  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:13.180764  585386 cri.go:89] found id: ""
	I1008 19:10:13.180792  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.180801  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:13.180810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:13.180874  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:13.221979  585386 cri.go:89] found id: ""
	I1008 19:10:13.222010  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.222021  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:13.222029  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:13.222097  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:13.258146  585386 cri.go:89] found id: ""
	I1008 19:10:13.258185  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.258198  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:13.258206  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:13.258267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:13.293006  585386 cri.go:89] found id: ""
	I1008 19:10:13.293045  585386 logs.go:282] 0 containers: []
	W1008 19:10:13.293056  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:13.293068  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:13.293086  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:13.312508  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:13.312535  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:13.406087  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:13.406109  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:13.406126  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:13.486583  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:13.486635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:13.528778  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:13.528808  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.079606  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:16.093060  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:16.093139  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:16.130160  585386 cri.go:89] found id: ""
	I1008 19:10:16.130192  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.130205  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:16.130213  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:16.130273  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:16.164347  585386 cri.go:89] found id: ""
	I1008 19:10:16.164383  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.164396  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:16.164404  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:16.164469  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:16.201568  585386 cri.go:89] found id: ""
	I1008 19:10:16.201615  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.201625  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:16.201635  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:16.201705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:16.239945  585386 cri.go:89] found id: ""
	I1008 19:10:16.239976  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.239985  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:16.239992  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:16.240048  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:16.271720  585386 cri.go:89] found id: ""
	I1008 19:10:16.271753  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.271765  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:16.271773  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:16.271845  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:16.303803  585386 cri.go:89] found id: ""
	I1008 19:10:16.303835  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.303847  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:16.303855  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:16.303917  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:16.335364  585386 cri.go:89] found id: ""
	I1008 19:10:16.335388  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.335397  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:16.335403  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:16.335466  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:16.369353  585386 cri.go:89] found id: ""
	I1008 19:10:16.369386  585386 logs.go:282] 0 containers: []
	W1008 19:10:16.369399  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:16.369410  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:16.369427  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:16.448243  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:16.448274  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:16.493249  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:16.493280  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:16.543738  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:16.543770  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:16.557728  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:16.557761  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:16.623229  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:15.682883  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.184458  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:16.374416  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:18.872958  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:17.446820  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.448067  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:19.124257  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:19.141115  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:19.141177  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:19.185623  585386 cri.go:89] found id: ""
	I1008 19:10:19.185652  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.185662  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:19.185670  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:19.185731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:19.230338  585386 cri.go:89] found id: ""
	I1008 19:10:19.230372  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.230384  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:19.230392  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:19.230459  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:19.272956  585386 cri.go:89] found id: ""
	I1008 19:10:19.272992  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.273005  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:19.273013  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:19.273102  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:19.305564  585386 cri.go:89] found id: ""
	I1008 19:10:19.305595  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.305604  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:19.305611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:19.305663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:19.336863  585386 cri.go:89] found id: ""
	I1008 19:10:19.336898  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.336907  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:19.336913  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:19.336966  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:19.368380  585386 cri.go:89] found id: ""
	I1008 19:10:19.368413  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.368422  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:19.368429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:19.368493  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:19.406666  585386 cri.go:89] found id: ""
	I1008 19:10:19.406698  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.406710  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:19.406717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:19.406771  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:19.445825  585386 cri.go:89] found id: ""
	I1008 19:10:19.445856  585386 logs.go:282] 0 containers: []
	W1008 19:10:19.445865  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:19.445875  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:19.445890  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:19.499884  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:19.499922  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:19.515547  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:19.515578  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:19.584905  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:19.584930  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:19.584944  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:19.661575  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:19.661614  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:20.686987  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.182360  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.372104  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.872156  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:21.947427  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:23.950711  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:22.201435  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:22.214044  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:22.214103  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:22.246006  585386 cri.go:89] found id: ""
	I1008 19:10:22.246034  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.246043  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:22.246049  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:22.246110  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:22.285635  585386 cri.go:89] found id: ""
	I1008 19:10:22.285676  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.285688  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:22.285696  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:22.285758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:22.318105  585386 cri.go:89] found id: ""
	I1008 19:10:22.318141  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.318153  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:22.318161  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:22.318223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:22.350109  585386 cri.go:89] found id: ""
	I1008 19:10:22.350133  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.350141  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:22.350147  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:22.350197  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:22.383950  585386 cri.go:89] found id: ""
	I1008 19:10:22.383980  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.383992  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:22.384000  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:22.384061  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:22.418765  585386 cri.go:89] found id: ""
	I1008 19:10:22.418794  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.418803  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:22.418809  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:22.418870  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:22.453132  585386 cri.go:89] found id: ""
	I1008 19:10:22.453158  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.453166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:22.453172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:22.453234  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:22.486280  585386 cri.go:89] found id: ""
	I1008 19:10:22.486310  585386 logs.go:282] 0 containers: []
	W1008 19:10:22.486337  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:22.486349  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:22.486363  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:22.566494  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:22.566545  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:22.603604  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:22.603642  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:22.655206  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:22.655243  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:22.668893  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:22.668925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:22.738540  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.239373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:25.252276  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:25.252335  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:25.286416  585386 cri.go:89] found id: ""
	I1008 19:10:25.286448  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.286466  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:25.286472  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:25.286524  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:25.320567  585386 cri.go:89] found id: ""
	I1008 19:10:25.320599  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.320611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:25.320618  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:25.320674  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:25.355703  585386 cri.go:89] found id: ""
	I1008 19:10:25.355735  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.355744  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:25.355752  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:25.355807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:25.387965  585386 cri.go:89] found id: ""
	I1008 19:10:25.387995  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.388006  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:25.388014  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:25.388075  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:25.420524  585386 cri.go:89] found id: ""
	I1008 19:10:25.420558  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.420572  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:25.420579  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:25.420633  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:25.454359  585386 cri.go:89] found id: ""
	I1008 19:10:25.454389  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.454398  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:25.454405  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:25.454453  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:25.486535  585386 cri.go:89] found id: ""
	I1008 19:10:25.486570  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.486581  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:25.486589  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:25.486651  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:25.519599  585386 cri.go:89] found id: ""
	I1008 19:10:25.519635  585386 logs.go:282] 0 containers: []
	W1008 19:10:25.519645  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:25.519655  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:25.519668  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:25.559972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:25.560008  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:25.610064  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:25.610105  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:25.624000  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:25.624039  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:25.700374  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:25.700398  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:25.700415  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:25.183749  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:27.184437  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.372132  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.372299  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:26.447201  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.948117  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.948772  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:28.281813  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:28.295128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:28.295202  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:28.329100  585386 cri.go:89] found id: ""
	I1008 19:10:28.329132  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.329144  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:28.329153  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:28.329218  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:28.360951  585386 cri.go:89] found id: ""
	I1008 19:10:28.360980  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.360992  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:28.360999  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:28.361060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:28.395440  585386 cri.go:89] found id: ""
	I1008 19:10:28.395469  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.395477  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:28.395484  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:28.395547  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:28.430289  585386 cri.go:89] found id: ""
	I1008 19:10:28.430327  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.430339  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:28.430347  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:28.430401  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:28.466841  585386 cri.go:89] found id: ""
	I1008 19:10:28.466867  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.466877  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:28.466885  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:28.466954  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:28.499633  585386 cri.go:89] found id: ""
	I1008 19:10:28.499661  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.499670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:28.499675  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:28.499737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:28.534511  585386 cri.go:89] found id: ""
	I1008 19:10:28.534543  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.534553  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:28.534559  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:28.534609  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:28.565759  585386 cri.go:89] found id: ""
	I1008 19:10:28.565794  585386 logs.go:282] 0 containers: []
	W1008 19:10:28.565804  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:28.565813  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:28.565825  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:28.617927  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:28.617963  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:28.631179  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:28.631212  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:28.697643  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:28.697670  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:28.697685  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:28.776410  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:28.776450  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.317151  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:31.329733  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:31.329829  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:31.361323  585386 cri.go:89] found id: ""
	I1008 19:10:31.361353  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.361364  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:31.361371  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:31.361434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:31.396888  585386 cri.go:89] found id: ""
	I1008 19:10:31.396916  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.396924  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:31.396930  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:31.396983  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:31.428824  585386 cri.go:89] found id: ""
	I1008 19:10:31.428851  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.428859  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:31.428866  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:31.428922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:31.459647  585386 cri.go:89] found id: ""
	I1008 19:10:31.459673  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.459681  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:31.459696  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:31.459758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:31.491398  585386 cri.go:89] found id: ""
	I1008 19:10:31.491425  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.491435  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:31.491443  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:31.491496  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:31.523014  585386 cri.go:89] found id: ""
	I1008 19:10:31.523043  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.523052  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:31.523065  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:31.523129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:31.564372  585386 cri.go:89] found id: ""
	I1008 19:10:31.564406  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.564424  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:31.564432  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:31.564498  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:31.599323  585386 cri.go:89] found id: ""
	I1008 19:10:31.599356  585386 logs.go:282] 0 containers: []
	W1008 19:10:31.599372  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:31.599384  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:31.599399  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:31.612507  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:31.612534  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:10:29.682860  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:31.683468  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:34.184018  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:30.872607  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:32.872784  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.373822  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:33.447573  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:35.447614  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	W1008 19:10:31.681702  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:31.681724  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:31.681738  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:31.759614  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:31.759649  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:31.796412  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:31.796462  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.349164  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:34.361878  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:34.361948  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:34.398716  585386 cri.go:89] found id: ""
	I1008 19:10:34.398746  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.398757  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:34.398765  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:34.398831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:34.431218  585386 cri.go:89] found id: ""
	I1008 19:10:34.431247  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.431256  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:34.431262  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:34.431326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:34.465212  585386 cri.go:89] found id: ""
	I1008 19:10:34.465238  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.465247  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:34.465253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:34.465310  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:34.496754  585386 cri.go:89] found id: ""
	I1008 19:10:34.496781  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.496791  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:34.496796  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:34.496843  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:34.528832  585386 cri.go:89] found id: ""
	I1008 19:10:34.528864  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.528876  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:34.528883  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:34.528945  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:34.563117  585386 cri.go:89] found id: ""
	I1008 19:10:34.563203  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.563219  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:34.563229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:34.563301  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:34.600743  585386 cri.go:89] found id: ""
	I1008 19:10:34.600769  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.600778  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:34.600784  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:34.600834  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:34.632432  585386 cri.go:89] found id: ""
	I1008 19:10:34.632480  585386 logs.go:282] 0 containers: []
	W1008 19:10:34.632492  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:34.632503  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:34.632519  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:34.692144  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:34.692183  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:34.705414  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:34.705440  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:34.768215  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:34.768240  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:34.768256  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:34.847292  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:34.847334  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:36.682470  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:38.683099  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.872270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.872490  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.450208  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:39.947418  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:37.397976  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:37.410693  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:37.410750  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:37.447953  585386 cri.go:89] found id: ""
	I1008 19:10:37.447987  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.447995  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:37.448003  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:37.448056  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:37.480447  585386 cri.go:89] found id: ""
	I1008 19:10:37.480476  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.480484  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:37.480490  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:37.480539  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:37.513079  585386 cri.go:89] found id: ""
	I1008 19:10:37.513113  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.513122  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:37.513128  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:37.513190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:37.549607  585386 cri.go:89] found id: ""
	I1008 19:10:37.549642  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.549655  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:37.549665  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:37.549727  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:37.584506  585386 cri.go:89] found id: ""
	I1008 19:10:37.584538  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.584552  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:37.584560  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:37.584621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:37.619177  585386 cri.go:89] found id: ""
	I1008 19:10:37.619212  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.619224  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:37.619232  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:37.619297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:37.655876  585386 cri.go:89] found id: ""
	I1008 19:10:37.655903  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.655915  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:37.655923  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:37.655979  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:37.693441  585386 cri.go:89] found id: ""
	I1008 19:10:37.693471  585386 logs.go:282] 0 containers: []
	W1008 19:10:37.693483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:37.693500  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:37.693515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:37.776978  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:37.777028  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:37.814263  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:37.814306  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:37.865598  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:37.865633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:37.879054  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:37.879078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:37.948059  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.449049  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:40.461586  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:40.461654  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:40.495475  585386 cri.go:89] found id: ""
	I1008 19:10:40.495514  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.495527  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:40.495536  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:40.495602  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:40.528982  585386 cri.go:89] found id: ""
	I1008 19:10:40.529007  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.529016  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:40.529022  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:40.529074  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:40.561474  585386 cri.go:89] found id: ""
	I1008 19:10:40.561504  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.561515  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:40.561522  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:40.561584  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:40.596399  585386 cri.go:89] found id: ""
	I1008 19:10:40.596437  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.596450  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:40.596458  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:40.596523  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:40.628594  585386 cri.go:89] found id: ""
	I1008 19:10:40.628626  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.628635  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:40.628642  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:40.628705  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:40.659272  585386 cri.go:89] found id: ""
	I1008 19:10:40.659305  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.659318  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:40.659327  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:40.659390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:40.692927  585386 cri.go:89] found id: ""
	I1008 19:10:40.692954  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.692966  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:40.692973  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:40.693035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:40.725908  585386 cri.go:89] found id: ""
	I1008 19:10:40.725940  585386 logs.go:282] 0 containers: []
	W1008 19:10:40.725953  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:40.725972  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:40.725989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:40.778671  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:40.778706  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:40.794386  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:40.794419  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:40.865485  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:40.865510  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:40.865525  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:40.950747  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:40.950783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:40.683975  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.182280  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.372711  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.873233  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:42.446673  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:44.447301  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:43.497771  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:43.510505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:43.510563  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:43.543603  585386 cri.go:89] found id: ""
	I1008 19:10:43.543638  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.543651  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:43.543659  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:43.543731  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:43.576126  585386 cri.go:89] found id: ""
	I1008 19:10:43.576151  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.576160  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:43.576165  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:43.576225  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:43.612875  585386 cri.go:89] found id: ""
	I1008 19:10:43.612902  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.612911  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:43.612917  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:43.612984  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:43.643074  585386 cri.go:89] found id: ""
	I1008 19:10:43.643109  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.643122  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:43.643130  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:43.643198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:43.675078  585386 cri.go:89] found id: ""
	I1008 19:10:43.675103  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.675112  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:43.675119  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:43.675178  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:43.709650  585386 cri.go:89] found id: ""
	I1008 19:10:43.709677  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.709686  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:43.709692  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:43.709753  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:43.742527  585386 cri.go:89] found id: ""
	I1008 19:10:43.742560  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.742573  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:43.742580  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:43.742644  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:43.774512  585386 cri.go:89] found id: ""
	I1008 19:10:43.774546  585386 logs.go:282] 0 containers: []
	W1008 19:10:43.774558  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:43.774570  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:43.774585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:43.855809  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:43.855852  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:43.898404  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:43.898439  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:43.952685  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:43.952716  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:43.967108  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:43.967136  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:44.044975  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.546057  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:46.561545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:46.561603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:46.596104  585386 cri.go:89] found id: ""
	I1008 19:10:46.596141  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.596155  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:46.596167  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:46.596232  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:46.629391  585386 cri.go:89] found id: ""
	I1008 19:10:46.629425  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.629436  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:46.629444  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:46.629511  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:45.188927  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.682373  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:47.371936  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:49.372190  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.447866  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:48.947579  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:46.663023  585386 cri.go:89] found id: ""
	I1008 19:10:46.663050  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.663059  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:46.663068  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:46.663119  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:46.696049  585386 cri.go:89] found id: ""
	I1008 19:10:46.696079  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.696090  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:46.696098  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:46.696159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:46.728467  585386 cri.go:89] found id: ""
	I1008 19:10:46.728497  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.728506  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:46.728511  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:46.728568  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:46.765976  585386 cri.go:89] found id: ""
	I1008 19:10:46.766003  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.766012  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:46.766019  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:46.766070  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:46.801726  585386 cri.go:89] found id: ""
	I1008 19:10:46.801753  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.801762  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:46.801768  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:46.801821  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:46.837556  585386 cri.go:89] found id: ""
	I1008 19:10:46.837595  585386 logs.go:282] 0 containers: []
	W1008 19:10:46.837610  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:46.837621  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:46.837635  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:46.893003  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:46.893034  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:46.906437  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:46.906470  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:46.971323  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:46.971352  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:46.971369  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:47.054813  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:47.054851  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.598091  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:49.613513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:49.613588  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:49.649704  585386 cri.go:89] found id: ""
	I1008 19:10:49.649742  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.649754  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:49.649761  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:49.649828  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:49.683645  585386 cri.go:89] found id: ""
	I1008 19:10:49.683674  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.683686  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:49.683693  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:49.683747  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:49.719792  585386 cri.go:89] found id: ""
	I1008 19:10:49.719820  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.719828  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:49.719834  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:49.719883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:49.756187  585386 cri.go:89] found id: ""
	I1008 19:10:49.756225  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.756237  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:49.756244  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:49.756300  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:49.789748  585386 cri.go:89] found id: ""
	I1008 19:10:49.789776  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.789786  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:49.789794  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:49.789857  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:49.824406  585386 cri.go:89] found id: ""
	I1008 19:10:49.824436  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.824448  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:49.824456  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:49.824590  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:49.860363  585386 cri.go:89] found id: ""
	I1008 19:10:49.860393  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.860405  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:49.860413  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:49.860477  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:49.896907  585386 cri.go:89] found id: ""
	I1008 19:10:49.896944  585386 logs.go:282] 0 containers: []
	W1008 19:10:49.896956  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:49.896968  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:49.896983  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:49.947015  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:49.947043  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:49.959792  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:49.959823  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:50.029955  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:50.029982  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:50.029995  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:50.107944  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:50.107982  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:49.683659  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.182955  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:54.184535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.373113  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.373239  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:51.446974  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:53.447804  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.947655  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:52.649047  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:52.662904  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:52.662980  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:52.697767  585386 cri.go:89] found id: ""
	I1008 19:10:52.697798  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.697809  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:52.697823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:52.697883  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:52.731558  585386 cri.go:89] found id: ""
	I1008 19:10:52.731598  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.731611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:52.731619  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:52.731691  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:52.765869  585386 cri.go:89] found id: ""
	I1008 19:10:52.765899  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.765908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:52.765914  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:52.765967  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:52.803182  585386 cri.go:89] found id: ""
	I1008 19:10:52.803210  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.803221  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:52.803229  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:52.803298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:52.839182  585386 cri.go:89] found id: ""
	I1008 19:10:52.839215  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.839225  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:52.839231  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:52.839306  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:52.871546  585386 cri.go:89] found id: ""
	I1008 19:10:52.871575  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.871584  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:52.871592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:52.871660  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:52.905474  585386 cri.go:89] found id: ""
	I1008 19:10:52.905502  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.905511  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:52.905523  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:52.905574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:52.940008  585386 cri.go:89] found id: ""
	I1008 19:10:52.940040  585386 logs.go:282] 0 containers: []
	W1008 19:10:52.940052  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:52.940064  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:52.940078  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:52.980463  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:52.980498  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:53.030867  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:53.030907  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:53.043384  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:53.043414  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:53.115086  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:53.115114  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:53.115131  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:55.695591  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:55.708987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:55.709060  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:55.741129  585386 cri.go:89] found id: ""
	I1008 19:10:55.741164  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.741176  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:55.741184  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:55.741250  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:55.777832  585386 cri.go:89] found id: ""
	I1008 19:10:55.777878  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.777892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:55.777901  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:55.777965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:55.811405  585386 cri.go:89] found id: ""
	I1008 19:10:55.811439  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.811452  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:55.811461  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:55.811532  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:55.848821  585386 cri.go:89] found id: ""
	I1008 19:10:55.848855  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.848868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:55.848876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:55.848939  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:55.883922  585386 cri.go:89] found id: ""
	I1008 19:10:55.883949  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.883959  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:55.883969  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:55.884035  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:55.922367  585386 cri.go:89] found id: ""
	I1008 19:10:55.922398  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.922410  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:55.922418  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:55.922485  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:55.955949  585386 cri.go:89] found id: ""
	I1008 19:10:55.955974  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.955982  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:55.955988  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:55.956045  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:55.989141  585386 cri.go:89] found id: ""
	I1008 19:10:55.989174  585386 logs.go:282] 0 containers: []
	W1008 19:10:55.989185  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:55.989199  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:55.989215  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:56.002613  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:56.002652  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:56.073149  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:56.073171  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:56.073185  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:56.149962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:56.150005  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:10:56.198810  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:56.198841  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:56.682535  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.683610  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:55.872286  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:57.872418  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:59.872720  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.447354  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:00.447456  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:10:58.751204  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:10:58.765335  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:10:58.765403  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:10:58.799851  585386 cri.go:89] found id: ""
	I1008 19:10:58.799882  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.799894  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:10:58.799903  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:10:58.799972  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:10:58.835415  585386 cri.go:89] found id: ""
	I1008 19:10:58.835443  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.835453  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:10:58.835459  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:10:58.835506  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:10:58.871046  585386 cri.go:89] found id: ""
	I1008 19:10:58.871073  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.871082  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:10:58.871090  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:10:58.871154  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:10:58.906271  585386 cri.go:89] found id: ""
	I1008 19:10:58.906297  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.906308  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:10:58.906332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:10:58.906395  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:10:58.955354  585386 cri.go:89] found id: ""
	I1008 19:10:58.955384  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.955395  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:10:58.955402  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:10:58.955465  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:10:58.992771  585386 cri.go:89] found id: ""
	I1008 19:10:58.992803  585386 logs.go:282] 0 containers: []
	W1008 19:10:58.992816  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:10:58.992825  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:10:58.992899  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:10:59.030384  585386 cri.go:89] found id: ""
	I1008 19:10:59.030417  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.030431  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:10:59.030440  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:10:59.030504  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:10:59.068445  585386 cri.go:89] found id: ""
	I1008 19:10:59.068472  585386 logs.go:282] 0 containers: []
	W1008 19:10:59.068483  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:10:59.068496  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:10:59.068511  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:10:59.124303  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:10:59.124349  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:10:59.137673  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:10:59.137707  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:10:59.207223  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:10:59.207247  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:10:59.207262  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:10:59.288689  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:10:59.288734  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:00.684164  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:03.182802  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.873903  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.372767  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:02.947088  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:04.948196  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:01.826704  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:01.839821  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:01.839901  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:01.876284  585386 cri.go:89] found id: ""
	I1008 19:11:01.876310  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.876319  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:01.876328  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:01.876393  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:01.908903  585386 cri.go:89] found id: ""
	I1008 19:11:01.908934  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.908946  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:01.908954  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:01.909021  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:01.942655  585386 cri.go:89] found id: ""
	I1008 19:11:01.942684  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.942696  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:01.942704  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:01.942766  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:01.977860  585386 cri.go:89] found id: ""
	I1008 19:11:01.977885  585386 logs.go:282] 0 containers: []
	W1008 19:11:01.977895  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:01.977903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:01.977969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:02.014480  585386 cri.go:89] found id: ""
	I1008 19:11:02.014513  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.014526  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:02.014534  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:02.014600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:02.047565  585386 cri.go:89] found id: ""
	I1008 19:11:02.047599  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.047612  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:02.047620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:02.047682  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:02.081704  585386 cri.go:89] found id: ""
	I1008 19:11:02.081740  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.081753  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:02.081761  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:02.081824  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:02.113703  585386 cri.go:89] found id: ""
	I1008 19:11:02.113744  585386 logs.go:282] 0 containers: []
	W1008 19:11:02.113756  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:02.113767  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:02.113783  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:02.165937  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:02.165974  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:02.179897  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:02.179935  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:02.246440  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:02.246467  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:02.246484  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:02.325432  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:02.325483  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:04.865549  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:04.880377  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:04.880460  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:04.915200  585386 cri.go:89] found id: ""
	I1008 19:11:04.915224  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.915232  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:04.915239  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:04.915286  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:04.963102  585386 cri.go:89] found id: ""
	I1008 19:11:04.963132  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.963141  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:04.963155  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:04.963221  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:04.997543  585386 cri.go:89] found id: ""
	I1008 19:11:04.997572  585386 logs.go:282] 0 containers: []
	W1008 19:11:04.997587  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:04.997596  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:04.997653  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:05.030461  585386 cri.go:89] found id: ""
	I1008 19:11:05.030493  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.030505  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:05.030513  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:05.030593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:05.070097  585386 cri.go:89] found id: ""
	I1008 19:11:05.070134  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.070147  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:05.070156  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:05.070223  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:05.103845  585386 cri.go:89] found id: ""
	I1008 19:11:05.103875  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.103888  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:05.103896  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:05.103961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:05.136474  585386 cri.go:89] found id: ""
	I1008 19:11:05.136511  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.136521  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:05.136528  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:05.136593  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:05.171083  585386 cri.go:89] found id: ""
	I1008 19:11:05.171108  585386 logs.go:282] 0 containers: []
	W1008 19:11:05.171117  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:05.171126  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:05.171139  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:05.224335  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:05.224376  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:05.240176  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:05.240205  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:05.317768  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:05.317799  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:05.317814  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:05.400527  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:05.400560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:05.683195  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.184305  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:06.872641  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:08.872811  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.447814  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:09.948377  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:07.937830  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:07.953255  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:07.953326  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:07.989089  585386 cri.go:89] found id: ""
	I1008 19:11:07.989118  585386 logs.go:282] 0 containers: []
	W1008 19:11:07.989127  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:07.989135  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:07.989198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:08.026710  585386 cri.go:89] found id: ""
	I1008 19:11:08.026745  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.026755  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:08.026761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:08.026815  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:08.059225  585386 cri.go:89] found id: ""
	I1008 19:11:08.059253  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.059262  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:08.059311  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:08.059366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:08.091543  585386 cri.go:89] found id: ""
	I1008 19:11:08.091579  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.091592  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:08.091600  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:08.091669  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:08.125395  585386 cri.go:89] found id: ""
	I1008 19:11:08.125432  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.125444  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:08.125451  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:08.125531  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:08.160668  585386 cri.go:89] found id: ""
	I1008 19:11:08.160695  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.160704  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:08.160711  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:08.160784  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:08.196365  585386 cri.go:89] found id: ""
	I1008 19:11:08.196390  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.196399  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:08.196404  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:08.196452  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:08.229377  585386 cri.go:89] found id: ""
	I1008 19:11:08.229412  585386 logs.go:282] 0 containers: []
	W1008 19:11:08.229424  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:08.229436  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:08.229451  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:08.267393  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:08.267424  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:08.322552  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:08.322588  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:08.336159  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:08.336194  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:08.408866  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:08.408889  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:08.408918  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:10.988314  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:11.002167  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:11.002246  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:11.037925  585386 cri.go:89] found id: ""
	I1008 19:11:11.037956  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.037965  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:11.037971  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:11.038032  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:11.076566  585386 cri.go:89] found id: ""
	I1008 19:11:11.076599  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.076611  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:11.076617  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:11.076671  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:11.117873  585386 cri.go:89] found id: ""
	I1008 19:11:11.117900  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.117908  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:11.117915  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:11.117965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:11.151165  585386 cri.go:89] found id: ""
	I1008 19:11:11.151192  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.151201  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:11.151208  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:11.151270  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:11.185099  585386 cri.go:89] found id: ""
	I1008 19:11:11.185125  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.185141  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:11.185148  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:11.185213  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:11.218758  585386 cri.go:89] found id: ""
	I1008 19:11:11.218790  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.218802  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:11.218811  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:11.218915  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:11.254901  585386 cri.go:89] found id: ""
	I1008 19:11:11.254929  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.254940  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:11.254972  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:11.255038  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:11.288856  585386 cri.go:89] found id: ""
	I1008 19:11:11.288888  585386 logs.go:282] 0 containers: []
	W1008 19:11:11.288909  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:11.288920  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:11.288936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:11.346073  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:11.346115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:11.370366  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:11.370395  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:11.444895  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:11.444919  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:11.444936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:11.522448  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:11.522485  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:10.186012  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.682829  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:11.374597  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:13.872241  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:12.447966  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.448396  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:14.060509  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:14.074531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:14.074617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:14.109059  585386 cri.go:89] found id: ""
	I1008 19:11:14.109086  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.109096  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:14.109104  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:14.109169  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:14.144039  585386 cri.go:89] found id: ""
	I1008 19:11:14.144077  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.144089  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:14.144096  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:14.144149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:14.176492  585386 cri.go:89] found id: ""
	I1008 19:11:14.176527  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.176539  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:14.176547  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:14.176608  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:14.212770  585386 cri.go:89] found id: ""
	I1008 19:11:14.212807  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.212818  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:14.212826  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:14.212890  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:14.246457  585386 cri.go:89] found id: ""
	I1008 19:11:14.246488  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.246501  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:14.246509  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:14.246578  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:14.277873  585386 cri.go:89] found id: ""
	I1008 19:11:14.277903  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.277913  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:14.277921  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:14.277985  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:14.309833  585386 cri.go:89] found id: ""
	I1008 19:11:14.309870  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.309881  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:14.309888  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:14.309956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:14.342237  585386 cri.go:89] found id: ""
	I1008 19:11:14.342263  585386 logs.go:282] 0 containers: []
	W1008 19:11:14.342276  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:14.342288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:14.342304  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:14.394603  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:14.394637  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:14.408822  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:14.408855  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:14.475964  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:14.475996  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:14.476011  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:14.558247  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:14.558287  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:14.683559  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.185276  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.372851  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:18.872479  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:16.947677  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:19.449701  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:17.100153  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:17.130964  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:17.131044  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:17.185653  585386 cri.go:89] found id: ""
	I1008 19:11:17.185683  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.185695  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:17.185702  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:17.185756  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:17.217309  585386 cri.go:89] found id: ""
	I1008 19:11:17.217335  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.217345  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:17.217353  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:17.217412  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:17.250016  585386 cri.go:89] found id: ""
	I1008 19:11:17.250060  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.250069  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:17.250074  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:17.250133  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:17.288507  585386 cri.go:89] found id: ""
	I1008 19:11:17.288539  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.288549  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:17.288556  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:17.288614  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:17.321181  585386 cri.go:89] found id: ""
	I1008 19:11:17.321218  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.321231  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:17.321239  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:17.321294  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:17.353799  585386 cri.go:89] found id: ""
	I1008 19:11:17.353826  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.353835  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:17.353843  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:17.353893  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:17.386438  585386 cri.go:89] found id: ""
	I1008 19:11:17.386464  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.386472  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:17.386478  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:17.386529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:17.422339  585386 cri.go:89] found id: ""
	I1008 19:11:17.422366  585386 logs.go:282] 0 containers: []
	W1008 19:11:17.422374  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:17.422383  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:17.422396  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:17.500962  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:17.500997  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:17.538559  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:17.538587  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:17.587482  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:17.587513  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:17.600549  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:17.600577  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:17.670125  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.171097  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:20.185620  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:20.185698  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:20.224221  585386 cri.go:89] found id: ""
	I1008 19:11:20.224248  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.224256  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:20.224263  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:20.224325  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:20.257540  585386 cri.go:89] found id: ""
	I1008 19:11:20.257572  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.257585  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:20.257593  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:20.257657  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:20.291537  585386 cri.go:89] found id: ""
	I1008 19:11:20.291569  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.291581  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:20.291590  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:20.291656  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:20.330186  585386 cri.go:89] found id: ""
	I1008 19:11:20.330214  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.330225  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:20.330234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:20.330298  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:20.363283  585386 cri.go:89] found id: ""
	I1008 19:11:20.363315  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.363325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:20.363332  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:20.363387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:20.398073  585386 cri.go:89] found id: ""
	I1008 19:11:20.398120  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.398130  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:20.398136  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:20.398191  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:20.431544  585386 cri.go:89] found id: ""
	I1008 19:11:20.431576  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.431588  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:20.431597  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:20.431663  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:20.465085  585386 cri.go:89] found id: ""
	I1008 19:11:20.465111  585386 logs.go:282] 0 containers: []
	W1008 19:11:20.465121  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:20.465131  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:20.465144  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:20.516925  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:20.516964  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:20.530098  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:20.530122  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:20.604930  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:20.604956  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:20.604971  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:20.683963  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:20.683996  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:19.682652  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.683209  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.684681  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.371629  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.373290  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:21.947319  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:24.446685  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:23.224801  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:23.237997  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:23.238077  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:23.272638  585386 cri.go:89] found id: ""
	I1008 19:11:23.272675  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.272688  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:23.272696  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:23.272758  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:23.306145  585386 cri.go:89] found id: ""
	I1008 19:11:23.306178  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.306188  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:23.306194  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:23.306258  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:23.338119  585386 cri.go:89] found id: ""
	I1008 19:11:23.338148  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.338158  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:23.338164  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:23.338226  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:23.372793  585386 cri.go:89] found id: ""
	I1008 19:11:23.372821  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.372832  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:23.372840  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:23.372905  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:23.409322  585386 cri.go:89] found id: ""
	I1008 19:11:23.409351  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.409361  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:23.409367  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:23.409431  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:23.443415  585386 cri.go:89] found id: ""
	I1008 19:11:23.443450  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.443461  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:23.443470  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:23.443527  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:23.476650  585386 cri.go:89] found id: ""
	I1008 19:11:23.476683  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.476691  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:23.476698  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:23.476763  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:23.510498  585386 cri.go:89] found id: ""
	I1008 19:11:23.510530  585386 logs.go:282] 0 containers: []
	W1008 19:11:23.510544  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:23.510556  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:23.510572  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:23.576112  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:23.576139  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:23.576153  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:23.653032  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:23.653084  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:23.691127  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:23.691165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:23.742768  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:23.742804  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.256888  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:26.269633  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:26.269711  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:26.306436  585386 cri.go:89] found id: ""
	I1008 19:11:26.306468  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.306482  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:26.306488  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:26.306557  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:26.341135  585386 cri.go:89] found id: ""
	I1008 19:11:26.341175  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.341187  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:26.341196  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:26.341281  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:26.376149  585386 cri.go:89] found id: ""
	I1008 19:11:26.376178  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.376186  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:26.376192  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:26.376244  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:26.410461  585386 cri.go:89] found id: ""
	I1008 19:11:26.410496  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.410507  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:26.410516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:26.410599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:26.448773  585386 cri.go:89] found id: ""
	I1008 19:11:26.448796  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.448804  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:26.448810  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:26.448866  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:26.481467  585386 cri.go:89] found id: ""
	I1008 19:11:26.481491  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.481500  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:26.481505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:26.481554  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:26.513212  585386 cri.go:89] found id: ""
	I1008 19:11:26.513239  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.513248  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:26.513263  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:26.513312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:26.553073  585386 cri.go:89] found id: ""
	I1008 19:11:26.553104  585386 logs.go:282] 0 containers: []
	W1008 19:11:26.553112  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:26.553121  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:26.553142  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:26.567242  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:26.567278  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:26.644047  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:26.644072  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:26.644091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:26.183070  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.185526  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:25.872866  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.371245  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.371878  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.447559  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:28.948355  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:30.949170  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:26.726025  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:26.726064  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:26.764261  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:26.764296  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.318376  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:29.331835  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:29.331922  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:29.368664  585386 cri.go:89] found id: ""
	I1008 19:11:29.368697  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.368710  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:29.368718  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:29.368781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:29.401527  585386 cri.go:89] found id: ""
	I1008 19:11:29.401562  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.401575  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:29.401583  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:29.401645  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:29.434829  585386 cri.go:89] found id: ""
	I1008 19:11:29.434865  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.434878  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:29.434886  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:29.434953  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:29.470595  585386 cri.go:89] found id: ""
	I1008 19:11:29.470630  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.470642  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:29.470650  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:29.470713  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:29.503077  585386 cri.go:89] found id: ""
	I1008 19:11:29.503109  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.503121  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:29.503129  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:29.503190  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:29.536418  585386 cri.go:89] found id: ""
	I1008 19:11:29.536445  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.536454  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:29.536460  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:29.536510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:29.570496  585386 cri.go:89] found id: ""
	I1008 19:11:29.570525  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.570538  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:29.570545  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:29.570622  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:29.604520  585386 cri.go:89] found id: ""
	I1008 19:11:29.604558  585386 logs.go:282] 0 containers: []
	W1008 19:11:29.604570  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:29.604582  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:29.604598  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:29.649254  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:29.649299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:29.701842  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:29.701877  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:29.715670  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:29.715698  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:29.780760  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:29.780787  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:29.780801  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:30.683714  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.182628  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.373119  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:34.872336  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:33.447847  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:35.947756  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:32.356975  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:32.370275  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:32.370366  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:32.404808  585386 cri.go:89] found id: ""
	I1008 19:11:32.404839  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.404850  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:32.404859  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:32.404920  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:32.438751  585386 cri.go:89] found id: ""
	I1008 19:11:32.438789  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.438806  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:32.438814  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:32.438882  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:32.472829  585386 cri.go:89] found id: ""
	I1008 19:11:32.472859  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.472869  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:32.472876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:32.472936  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:32.506928  585386 cri.go:89] found id: ""
	I1008 19:11:32.506961  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.506974  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:32.506982  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:32.507049  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:32.541009  585386 cri.go:89] found id: ""
	I1008 19:11:32.541045  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.541057  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:32.541064  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:32.541127  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:32.576438  585386 cri.go:89] found id: ""
	I1008 19:11:32.576467  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.576475  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:32.576482  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:32.576546  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:32.608748  585386 cri.go:89] found id: ""
	I1008 19:11:32.608777  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.608786  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:32.608799  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:32.608861  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:32.640037  585386 cri.go:89] found id: ""
	I1008 19:11:32.640063  585386 logs.go:282] 0 containers: []
	W1008 19:11:32.640071  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:32.640079  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:32.640091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:32.692351  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:32.692386  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:32.705898  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:32.705925  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:32.771478  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:32.771505  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:32.771521  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:32.847491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:32.847529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.390756  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:35.403887  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:35.403960  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:35.436764  585386 cri.go:89] found id: ""
	I1008 19:11:35.436795  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.436814  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:35.436823  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:35.436889  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:35.471706  585386 cri.go:89] found id: ""
	I1008 19:11:35.471741  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.471753  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:35.471761  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:35.471831  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:35.504468  585386 cri.go:89] found id: ""
	I1008 19:11:35.504499  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.504511  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:35.504519  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:35.504579  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:35.538863  585386 cri.go:89] found id: ""
	I1008 19:11:35.538889  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.538897  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:35.538903  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:35.538956  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:35.572923  585386 cri.go:89] found id: ""
	I1008 19:11:35.572960  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.572973  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:35.572981  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:35.573050  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:35.607898  585386 cri.go:89] found id: ""
	I1008 19:11:35.607929  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.607941  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:35.607950  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:35.608013  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:35.641444  585386 cri.go:89] found id: ""
	I1008 19:11:35.641483  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.641497  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:35.641505  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:35.641574  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:35.675641  585386 cri.go:89] found id: ""
	I1008 19:11:35.675672  585386 logs.go:282] 0 containers: []
	W1008 19:11:35.675682  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:35.675691  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:35.675702  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:35.749789  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:35.749831  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:35.787373  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:35.787403  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:35.840600  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:35.840640  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:35.855237  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:35.855266  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:35.925902  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:35.183021  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.682254  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:37.371644  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:39.372270  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.447549  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:40.946928  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:38.426385  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:38.439151  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:38.439235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:38.472394  585386 cri.go:89] found id: ""
	I1008 19:11:38.472423  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.472440  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:38.472448  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:38.472501  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:38.508031  585386 cri.go:89] found id: ""
	I1008 19:11:38.508057  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.508066  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:38.508072  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:38.508123  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:38.543737  585386 cri.go:89] found id: ""
	I1008 19:11:38.543765  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.543774  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:38.543780  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:38.543849  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:38.583860  585386 cri.go:89] found id: ""
	I1008 19:11:38.583889  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.583900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:38.583908  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:38.583969  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:38.622871  585386 cri.go:89] found id: ""
	I1008 19:11:38.622906  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.622918  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:38.622926  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:38.622987  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:38.660614  585386 cri.go:89] found id: ""
	I1008 19:11:38.660639  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.660648  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:38.660654  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:38.660712  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:38.695748  585386 cri.go:89] found id: ""
	I1008 19:11:38.695774  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.695782  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:38.695788  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:38.695850  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:38.726171  585386 cri.go:89] found id: ""
	I1008 19:11:38.726202  585386 logs.go:282] 0 containers: []
	W1008 19:11:38.726211  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:38.726224  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:38.726240  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:38.739675  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:38.739703  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:38.805919  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:38.805943  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:38.805958  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:38.883902  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:38.883936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:38.924468  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:38.924509  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:41.479544  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:41.492253  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:41.492327  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:41.526886  585386 cri.go:89] found id: ""
	I1008 19:11:41.526919  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.526929  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:41.526937  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:41.526990  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:41.561647  585386 cri.go:89] found id: ""
	I1008 19:11:41.561672  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.561681  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:41.561686  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:41.561737  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:41.596189  585386 cri.go:89] found id: ""
	I1008 19:11:41.596219  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.596228  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:41.596234  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:41.596295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:41.627790  585386 cri.go:89] found id: ""
	I1008 19:11:41.627831  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.627840  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:41.627846  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:41.627912  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.182928  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.873545  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.372751  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:42.947699  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:44.949106  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:41.660430  585386 cri.go:89] found id: ""
	I1008 19:11:41.660454  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.660463  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:41.660469  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:41.660530  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:41.699475  585386 cri.go:89] found id: ""
	I1008 19:11:41.699501  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.699510  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:41.699516  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:41.699577  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:41.737560  585386 cri.go:89] found id: ""
	I1008 19:11:41.737591  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.737603  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:41.737611  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:41.737673  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:41.775526  585386 cri.go:89] found id: ""
	I1008 19:11:41.775551  585386 logs.go:282] 0 containers: []
	W1008 19:11:41.775560  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:41.775569  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:41.775585  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:41.788982  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:41.789015  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:41.861833  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:41.861854  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:41.861866  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:41.943482  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:41.943515  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:41.983308  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:41.983342  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.538073  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:44.551565  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:44.551636  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:44.590175  585386 cri.go:89] found id: ""
	I1008 19:11:44.590206  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.590219  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:44.590226  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:44.590297  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:44.622401  585386 cri.go:89] found id: ""
	I1008 19:11:44.622434  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.622446  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:44.622454  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:44.622529  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:44.655502  585386 cri.go:89] found id: ""
	I1008 19:11:44.655536  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.655546  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:44.655553  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:44.655603  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:44.692078  585386 cri.go:89] found id: ""
	I1008 19:11:44.692108  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.692117  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:44.692123  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:44.692175  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:44.725282  585386 cri.go:89] found id: ""
	I1008 19:11:44.725310  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.725318  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:44.725324  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:44.725378  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:44.763080  585386 cri.go:89] found id: ""
	I1008 19:11:44.763113  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.763126  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:44.763132  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:44.763192  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:44.800193  585386 cri.go:89] found id: ""
	I1008 19:11:44.800222  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.800234  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:44.800242  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:44.800312  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:44.837676  585386 cri.go:89] found id: ""
	I1008 19:11:44.837708  585386 logs.go:282] 0 containers: []
	W1008 19:11:44.837720  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:44.837732  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:44.837749  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:44.894684  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:44.894719  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:44.909714  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:44.909747  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:44.976219  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:44.976245  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:44.976261  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:45.060104  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:45.060146  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:44.684067  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.182248  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.183397  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:46.871983  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:48.872101  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:47.447284  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:49.448275  585014 pod_ready.go:103] pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.949171  585014 pod_ready.go:82] duration metric: took 4m0.008012606s for pod "metrics-server-6867b74b74-4d48d" in "kube-system" namespace to be "Ready" ...
	E1008 19:11:50.949202  585014 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:11:50.949213  585014 pod_ready.go:39] duration metric: took 4m6.974004451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:11:50.949249  585014 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:11:50.949290  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.949351  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.998560  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:50.998584  585014 cri.go:89] found id: ""
	I1008 19:11:50.998591  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:11:50.998649  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.003407  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:51.003490  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.601484  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:47.615243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:47.615314  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:47.649597  585386 cri.go:89] found id: ""
	I1008 19:11:47.649627  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.649637  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:47.649647  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:47.649710  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:47.683135  585386 cri.go:89] found id: ""
	I1008 19:11:47.683162  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.683178  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:47.683185  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:47.683243  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:47.717509  585386 cri.go:89] found id: ""
	I1008 19:11:47.717536  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.717545  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:47.717552  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:47.717621  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:47.752586  585386 cri.go:89] found id: ""
	I1008 19:11:47.752616  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.752628  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:47.752636  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:47.752703  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:47.789353  585386 cri.go:89] found id: ""
	I1008 19:11:47.789386  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.789400  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:47.789408  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:47.789476  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:47.822848  585386 cri.go:89] found id: ""
	I1008 19:11:47.822884  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.822896  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:47.822905  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:47.822965  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:47.855752  585386 cri.go:89] found id: ""
	I1008 19:11:47.855787  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.855798  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:47.855806  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:47.855876  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:47.893243  585386 cri.go:89] found id: ""
	I1008 19:11:47.893270  585386 logs.go:282] 0 containers: []
	W1008 19:11:47.893279  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:47.893288  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:47.893299  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:47.945961  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:47.945989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:47.960067  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:47.960091  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:48.025791  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:48.025822  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:48.025839  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:48.101402  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:48.101445  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:50.642373  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:50.655772  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:50.655852  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:50.692344  585386 cri.go:89] found id: ""
	I1008 19:11:50.692372  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.692380  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:50.692387  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:50.692443  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:50.726357  585386 cri.go:89] found id: ""
	I1008 19:11:50.726387  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.726395  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:50.726401  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:50.726464  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:50.759378  585386 cri.go:89] found id: ""
	I1008 19:11:50.759411  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.759422  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:50.759429  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:50.759494  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:50.792745  585386 cri.go:89] found id: ""
	I1008 19:11:50.792783  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.792796  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:50.792805  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:50.792871  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:50.825663  585386 cri.go:89] found id: ""
	I1008 19:11:50.825697  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.825709  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:50.825717  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:50.825796  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:50.858935  585386 cri.go:89] found id: ""
	I1008 19:11:50.858970  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.858981  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:50.858987  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:50.859054  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:50.895128  585386 cri.go:89] found id: ""
	I1008 19:11:50.895158  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.895166  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:50.895172  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:50.895235  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:50.947216  585386 cri.go:89] found id: ""
	I1008 19:11:50.947250  585386 logs.go:282] 0 containers: []
	W1008 19:11:50.947262  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:50.947272  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:50.947292  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:51.021447  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:51.021474  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.021491  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:51.118133  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:51.118170  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:51.165495  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:51.165532  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:51.221306  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:51.221333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:51.183611  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:53.683418  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:50.872692  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:52.873320  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:55.372722  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:51.049315  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:51.049343  585014 cri.go:89] found id: ""
	I1008 19:11:51.049353  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:11:51.049411  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.055212  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:51.055281  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:51.101271  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.101292  585014 cri.go:89] found id: ""
	I1008 19:11:51.101300  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:11:51.101360  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.105902  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:51.105966  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:51.150355  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.150390  585014 cri.go:89] found id: ""
	I1008 19:11:51.150402  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:11:51.150468  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.155116  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:51.155193  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:51.197754  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:51.197779  585014 cri.go:89] found id: ""
	I1008 19:11:51.197790  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:11:51.197846  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.201957  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:51.202023  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:51.239982  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:51.240001  585014 cri.go:89] found id: ""
	I1008 19:11:51.240009  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:11:51.240064  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.244580  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:51.244645  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:51.280099  585014 cri.go:89] found id: ""
	I1008 19:11:51.280126  585014 logs.go:282] 0 containers: []
	W1008 19:11:51.280137  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:51.280144  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:11:51.280205  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:11:51.323467  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:51.323508  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:51.323514  585014 cri.go:89] found id: ""
	I1008 19:11:51.323525  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:11:51.323676  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.328091  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:11:51.332113  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:51.332139  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:11:51.455430  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:11:51.455463  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:11:51.492792  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:11:51.492824  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:11:51.533732  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:51.533768  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:52.085919  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:11:52.085972  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:11:52.120874  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:52.120912  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:11:52.163961  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164188  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.164330  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.164489  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.195681  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:52.195716  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:52.210569  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:11:52.210601  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:11:52.256667  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:11:52.256700  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:11:52.303627  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:11:52.303685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:11:52.340250  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:11:52.340279  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:11:52.402179  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:11:52.402213  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:11:52.440288  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:11:52.440326  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:52.478952  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.478979  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:11:52.479043  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:11:52.479060  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479068  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:11:52.479077  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:11:52.479084  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:11:52.479092  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:11:52.479101  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:11:53.737143  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:53.750760  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:53.750833  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:53.784022  585386 cri.go:89] found id: ""
	I1008 19:11:53.784058  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.784070  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:53.784078  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:53.784135  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:53.818753  585386 cri.go:89] found id: ""
	I1008 19:11:53.818785  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.818804  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:53.818812  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:53.818879  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:53.852997  585386 cri.go:89] found id: ""
	I1008 19:11:53.853030  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.853042  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:53.853049  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:53.853115  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:53.887826  585386 cri.go:89] found id: ""
	I1008 19:11:53.887856  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.887868  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:53.887876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:53.887992  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:53.923205  585386 cri.go:89] found id: ""
	I1008 19:11:53.923229  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.923237  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:53.923243  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:53.923295  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:53.955680  585386 cri.go:89] found id: ""
	I1008 19:11:53.955706  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.955715  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:53.955721  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:53.955772  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:53.998488  585386 cri.go:89] found id: ""
	I1008 19:11:53.998520  585386 logs.go:282] 0 containers: []
	W1008 19:11:53.998529  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:53.998535  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:53.998599  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:54.036109  585386 cri.go:89] found id: ""
	I1008 19:11:54.036147  585386 logs.go:282] 0 containers: []
	W1008 19:11:54.036160  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:54.036171  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:54.036188  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:54.086936  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:54.086978  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:54.100911  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:54.100939  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:54.171361  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:54.171390  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:54.171405  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:54.261117  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:54.261165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:56.182942  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:58.184307  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:57.373902  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:59.872567  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:11:56.801628  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:56.815072  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:56.815149  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:56.853394  585386 cri.go:89] found id: ""
	I1008 19:11:56.853424  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.853435  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:56.853443  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:56.853510  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:56.887436  585386 cri.go:89] found id: ""
	I1008 19:11:56.887463  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.887473  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:56.887479  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:56.887542  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:11:56.924102  585386 cri.go:89] found id: ""
	I1008 19:11:56.924130  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.924139  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:11:56.924146  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:11:56.924198  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:11:56.957596  585386 cri.go:89] found id: ""
	I1008 19:11:56.957627  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.957637  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:11:56.957643  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:11:56.957707  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:11:56.991432  585386 cri.go:89] found id: ""
	I1008 19:11:56.991467  585386 logs.go:282] 0 containers: []
	W1008 19:11:56.991481  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:11:56.991489  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:11:56.991559  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:11:57.027680  585386 cri.go:89] found id: ""
	I1008 19:11:57.027705  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.027714  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:11:57.027720  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:11:57.027780  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:11:57.062030  585386 cri.go:89] found id: ""
	I1008 19:11:57.062063  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.062073  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:11:57.062079  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:11:57.062151  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:11:57.095548  585386 cri.go:89] found id: ""
	I1008 19:11:57.095582  585386 logs.go:282] 0 containers: []
	W1008 19:11:57.095603  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:11:57.095617  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:11:57.095633  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:11:57.182122  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:11:57.182165  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:11:57.222879  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:11:57.222909  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:11:57.277293  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:11:57.277333  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:11:57.292011  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:11:57.292037  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:11:57.407987  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:11:59.908996  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:11:59.921876  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:11:59.921947  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:11:59.958033  585386 cri.go:89] found id: ""
	I1008 19:11:59.958063  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.958072  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:11:59.958079  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:11:59.958144  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:11:59.992264  585386 cri.go:89] found id: ""
	I1008 19:11:59.992304  585386 logs.go:282] 0 containers: []
	W1008 19:11:59.992317  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:11:59.992325  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:11:59.992390  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:00.026160  585386 cri.go:89] found id: ""
	I1008 19:12:00.026195  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.026207  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:00.026216  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:00.026284  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:00.058660  585386 cri.go:89] found id: ""
	I1008 19:12:00.058692  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.058705  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:00.058713  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:00.058765  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:00.093815  585386 cri.go:89] found id: ""
	I1008 19:12:00.093847  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.093856  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:00.093863  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:00.093924  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:00.125635  585386 cri.go:89] found id: ""
	I1008 19:12:00.125660  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.125670  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:00.125683  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:00.125744  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:00.158699  585386 cri.go:89] found id: ""
	I1008 19:12:00.158734  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.158744  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:00.158751  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:00.158807  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:00.199337  585386 cri.go:89] found id: ""
	I1008 19:12:00.199373  585386 logs.go:282] 0 containers: []
	W1008 19:12:00.199386  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:00.199398  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:00.199413  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:00.235505  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:00.235541  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:00.286079  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:00.286115  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:00.299915  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:00.299948  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:00.379176  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:00.379197  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:00.379213  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:00.683230  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:03.184294  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.372439  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:04.871327  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:02.480085  585014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.498401  585014 api_server.go:72] duration metric: took 4m26.226421652s to wait for apiserver process to appear ...
	I1008 19:12:02.498433  585014 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:12:02.498479  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.498544  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:02.533531  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:02.533563  585014 cri.go:89] found id: ""
	I1008 19:12:02.533575  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:02.533643  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.537914  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:02.537985  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:02.579011  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:02.579039  585014 cri.go:89] found id: ""
	I1008 19:12:02.579049  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:02.579111  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.583628  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:02.583695  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:02.625038  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.625065  585014 cri.go:89] found id: ""
	I1008 19:12:02.625075  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:02.625138  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.629262  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:02.629331  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:02.662964  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:02.662988  585014 cri.go:89] found id: ""
	I1008 19:12:02.662997  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:02.663052  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.666955  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:02.667013  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:02.704552  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:02.704578  585014 cri.go:89] found id: ""
	I1008 19:12:02.704589  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:02.704640  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.708910  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:02.708962  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:02.743196  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.743220  585014 cri.go:89] found id: ""
	I1008 19:12:02.743229  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:02.743276  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.747488  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:02.747563  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:02.789367  585014 cri.go:89] found id: ""
	I1008 19:12:02.789405  585014 logs.go:282] 0 containers: []
	W1008 19:12:02.789418  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:02.789426  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:02.789479  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:02.828607  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:02.828640  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.828646  585014 cri.go:89] found id: ""
	I1008 19:12:02.828656  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:02.828723  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.832981  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:02.837258  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:02.837284  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:02.874214  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:02.874249  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:02.925844  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:02.925879  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:02.963715  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:02.963744  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.009069  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.009102  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:03.046628  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.046816  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.046947  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.047129  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.080027  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.080068  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:03.203192  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:03.203233  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:03.254645  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:03.254681  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:03.300881  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:03.300918  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:03.347403  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.347440  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.802754  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.802801  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.816658  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:03.816695  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:03.873630  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:03.873670  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:03.910834  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.910862  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:03.910932  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:03.910946  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910955  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:03.910972  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:03.910983  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:03.910994  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:03.911006  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 19:12:02.964745  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:02.977313  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:02.977380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:03.018618  585386 cri.go:89] found id: ""
	I1008 19:12:03.018651  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.018663  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:03.018671  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:03.018735  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:03.054514  585386 cri.go:89] found id: ""
	I1008 19:12:03.054541  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.054551  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:03.054559  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:03.054624  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:03.100338  585386 cri.go:89] found id: ""
	I1008 19:12:03.100373  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.100384  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:03.100392  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:03.100455  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:03.150845  585386 cri.go:89] found id: ""
	I1008 19:12:03.150887  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.150900  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:03.150909  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:03.150982  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:03.198496  585386 cri.go:89] found id: ""
	I1008 19:12:03.198534  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.198546  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:03.198554  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:03.198617  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:03.239529  585386 cri.go:89] found id: ""
	I1008 19:12:03.239558  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.239568  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:03.239574  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:03.239626  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:03.275510  585386 cri.go:89] found id: ""
	I1008 19:12:03.275548  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.275560  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:03.275568  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:03.275629  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:03.317335  585386 cri.go:89] found id: ""
	I1008 19:12:03.317365  585386 logs.go:282] 0 containers: []
	W1008 19:12:03.317376  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:03.317387  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:03.317402  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:03.334327  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:03.334360  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:03.409948  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:03.409977  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:03.409994  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:03.488491  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:03.488527  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:03.525569  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:03.525599  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
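	(The repeated "listing CRI containers in root" probes above amount to running `sudo crictl ps -a --quiet --name=<component>` for each control-plane component and treating empty output as "No container was found matching". A minimal, hypothetical Go sketch of that probe is below, run locally with os/exec rather than over minikube's ssh_runner; the component list mirrors the log, everything else is illustrative.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line (empty slice if none match).
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// Same components the log above probes for, in the same order.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```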
	I1008 19:12:06.076256  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:06.090508  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:06.090576  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:06.125712  585386 cri.go:89] found id: ""
	I1008 19:12:06.125742  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.125750  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:06.125757  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:06.125811  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:06.161999  585386 cri.go:89] found id: ""
	I1008 19:12:06.162029  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.162042  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:06.162050  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:06.162118  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:06.197267  585386 cri.go:89] found id: ""
	I1008 19:12:06.197296  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.197307  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:06.197316  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:06.197387  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:06.231674  585386 cri.go:89] found id: ""
	I1008 19:12:06.231706  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.231717  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:06.231725  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:06.231799  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:06.265648  585386 cri.go:89] found id: ""
	I1008 19:12:06.265676  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.265687  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:06.265706  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:06.265781  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:06.299467  585386 cri.go:89] found id: ""
	I1008 19:12:06.299502  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.299515  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:06.299531  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:06.299600  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:06.331673  585386 cri.go:89] found id: ""
	I1008 19:12:06.331700  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.331708  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:06.331714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:06.331776  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:06.365251  585386 cri.go:89] found id: ""
	I1008 19:12:06.365285  585386 logs.go:282] 0 containers: []
	W1008 19:12:06.365297  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:06.365309  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:06.365324  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:06.446674  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:06.446709  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:06.487330  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:06.487362  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:06.537682  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:06.537718  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:06.551596  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:06.551632  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:06.617480  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:05.682916  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:07.683273  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:06.872011  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:08.873682  585096 pod_ready.go:103] pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:09.117654  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:09.134173  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:09.134254  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:09.180643  585386 cri.go:89] found id: ""
	I1008 19:12:09.180690  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.180703  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:12:09.180711  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:09.180774  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:09.215591  585386 cri.go:89] found id: ""
	I1008 19:12:09.215621  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.215630  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:12:09.215636  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:09.215690  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:09.254307  585386 cri.go:89] found id: ""
	I1008 19:12:09.254352  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.254365  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:12:09.254372  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:09.254434  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:09.289010  585386 cri.go:89] found id: ""
	I1008 19:12:09.289040  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.289051  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:12:09.289058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:09.289129  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:09.323287  585386 cri.go:89] found id: ""
	I1008 19:12:09.323316  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.323325  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:12:09.323338  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:09.323408  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:09.357008  585386 cri.go:89] found id: ""
	I1008 19:12:09.357038  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.357049  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:12:09.357058  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:09.357121  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:09.392667  585386 cri.go:89] found id: ""
	I1008 19:12:09.392695  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.392707  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:09.392714  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:12:09.392779  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:12:09.426662  585386 cri.go:89] found id: ""
	I1008 19:12:09.426703  585386 logs.go:282] 0 containers: []
	W1008 19:12:09.426716  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:12:09.426728  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:09.426743  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:12:09.477933  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:09.477965  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:09.491842  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:09.491874  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:12:09.558565  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:12:09.558593  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:09.558607  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:09.636628  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:12:09.636669  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:09.684055  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.182786  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:14.186868  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:12.176195  585386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:12:12.190381  585386 kubeadm.go:597] duration metric: took 4m2.309906822s to restartPrimaryControlPlane
	W1008 19:12:12.190467  585386 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:12.190495  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.236422  585386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.045906129s)
	I1008 19:12:14.236515  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:14.252511  585386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:14.265214  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:14.275762  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:14.275783  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:14.275825  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:12:14.285363  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:14.285409  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:14.295884  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:12:14.305239  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:14.305281  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:14.314550  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.323647  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:14.323747  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:14.333811  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:12:14.342808  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:14.342864  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
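	(The block above is the stale-kubeconfig cleanup that runs before `kubeadm init`: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, minikube greps for the expected control-plane endpoint and removes the file when the endpoint is not found; here the files do not exist at all, so each grep exits with status 2 and the rm is a no-op. A rough, hypothetical Go sketch of that check-and-remove loop follows; the endpoint and paths are taken from the log, the rest is illustrative.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			_ = os.Remove(p) // ignore "no such file", as the log does
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```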
	I1008 19:12:14.352182  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:14.424497  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:12:14.424782  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:14.579285  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:14.579561  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:14.579709  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:12:14.757071  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:10.866893  585096 pod_ready.go:82] duration metric: took 4m0.000956825s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:10.866937  585096 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-pfc2c" in "kube-system" namespace to be "Ready" (will not retry!)
	I1008 19:12:10.866961  585096 pod_ready.go:39] duration metric: took 4m15.184400794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:10.866992  585096 kubeadm.go:597] duration metric: took 4m23.829186185s to restartPrimaryControlPlane
	W1008 19:12:10.867049  585096 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 19:12:10.867092  585096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:12:14.758719  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:14.758841  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:14.758950  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:14.759069  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:14.759179  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:14.759313  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:14.759398  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:14.759957  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:14.760840  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:14.761668  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:14.762521  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:14.762759  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:14.762844  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:15.135727  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:15.256880  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:15.399976  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:15.473191  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:15.488121  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:15.489263  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:15.489341  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:15.653179  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:13.911944  585014 api_server.go:253] Checking apiserver healthz at https://192.168.72.183:8443/healthz ...
	I1008 19:12:13.917530  585014 api_server.go:279] https://192.168.72.183:8443/healthz returned 200:
	ok
	I1008 19:12:13.918513  585014 api_server.go:141] control plane version: v1.31.1
	I1008 19:12:13.918537  585014 api_server.go:131] duration metric: took 11.420096691s to wait for apiserver health ...
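	(The healthz lines above show the apiserver poll succeeding: minikube keeps hitting https://192.168.72.183:8443/healthz until it answers 200 "ok", then reads the control-plane version. A minimal hypothetical Go sketch of such a poll is below; skipping TLS verification is an assumption made only to keep the sketch self-contained, not something the real code necessarily does.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.183:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```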
	I1008 19:12:13.918546  585014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:12:13.918570  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:13.918621  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:13.957026  585014 cri.go:89] found id: "8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:13.957048  585014 cri.go:89] found id: ""
	I1008 19:12:13.957057  585014 logs.go:282] 1 containers: [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f]
	I1008 19:12:13.957114  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:13.961553  585014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:13.961611  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:13.996466  585014 cri.go:89] found id: "ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:13.996497  585014 cri.go:89] found id: ""
	I1008 19:12:13.996508  585014 logs.go:282] 1 containers: [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f]
	I1008 19:12:13.996570  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.000972  585014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:14.001036  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:14.034888  585014 cri.go:89] found id: "b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.034917  585014 cri.go:89] found id: ""
	I1008 19:12:14.034929  585014 logs.go:282] 1 containers: [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5]
	I1008 19:12:14.034989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.039145  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:14.039216  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:14.074109  585014 cri.go:89] found id: "639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:14.074134  585014 cri.go:89] found id: ""
	I1008 19:12:14.074145  585014 logs.go:282] 1 containers: [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f]
	I1008 19:12:14.074202  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.078291  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:14.078371  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:14.113375  585014 cri.go:89] found id: "44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:14.113403  585014 cri.go:89] found id: ""
	I1008 19:12:14.113413  585014 logs.go:282] 1 containers: [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2]
	I1008 19:12:14.113475  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.117909  585014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:14.118002  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:14.153800  585014 cri.go:89] found id: "2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:14.153823  585014 cri.go:89] found id: ""
	I1008 19:12:14.153833  585014 logs.go:282] 1 containers: [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674]
	I1008 19:12:14.153898  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.158233  585014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:14.158302  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:14.195093  585014 cri.go:89] found id: ""
	I1008 19:12:14.195123  585014 logs.go:282] 0 containers: []
	W1008 19:12:14.195133  585014 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:14.195142  585014 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:14.195203  585014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:14.230894  585014 cri.go:89] found id: "6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:14.230917  585014 cri.go:89] found id: "ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:14.230921  585014 cri.go:89] found id: ""
	I1008 19:12:14.230929  585014 logs.go:282] 2 containers: [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda]
	I1008 19:12:14.230989  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.236299  585014 ssh_runner.go:195] Run: which crictl
	I1008 19:12:14.240914  585014 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:14.240940  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:12:14.282289  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282488  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:14.282643  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:14.282824  585014 logs.go:138] Found kubelet problem: Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:14.315207  585014 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:14.315235  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:14.433616  585014 logs.go:123] Gathering logs for etcd [ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f] ...
	I1008 19:12:14.433647  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef34a632006c757b1e20e85039b07d7674e05dcdd56c691f93316b2c04ca533f"
	I1008 19:12:14.482640  585014 logs.go:123] Gathering logs for coredns [b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5] ...
	I1008 19:12:14.482685  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4aceabf5c4e9bfa48a3db9cd428145d70c50724eecfe211ef08495a266e7bd5"
	I1008 19:12:14.524749  585014 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:14.524788  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:14.979562  585014 logs.go:123] Gathering logs for storage-provisioner [6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e] ...
	I1008 19:12:14.979629  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e05aeedd245a45d5deaee82796add9f61cd0a341f2afaae4578d4a77197245e"
	I1008 19:12:15.016898  585014 logs.go:123] Gathering logs for storage-provisioner [ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda] ...
	I1008 19:12:15.016941  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa903de853fd738822015d4f220c1e1da67e738cf26f6f66c0d49331d7fcfda"
	I1008 19:12:15.058447  585014 logs.go:123] Gathering logs for container status ...
	I1008 19:12:15.058478  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:15.114345  585014 logs.go:123] Gathering logs for dmesg ...
	I1008 19:12:15.114384  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:12:15.128920  585014 logs.go:123] Gathering logs for kube-apiserver [8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f] ...
	I1008 19:12:15.128948  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8355c440ac92939674c6b02ace136bb8971783539ad0e5c2c4fbab5146153d1f"
	I1008 19:12:15.176775  585014 logs.go:123] Gathering logs for kube-scheduler [639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f] ...
	I1008 19:12:15.176817  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 639ce8bca3484a01ae0517cbb583749116f7e352e59731d0b542b2adb779fe3f"
	I1008 19:12:15.215091  585014 logs.go:123] Gathering logs for kube-proxy [44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2] ...
	I1008 19:12:15.215129  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cb46dbe3fe0b3585e5c107a34935878eed90697b286cf084432648bcf868e2"
	I1008 19:12:15.256687  585014 logs.go:123] Gathering logs for kube-controller-manager [2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674] ...
	I1008 19:12:15.256731  585014 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a7606685755cd2d3426dcab26892dbd1ca28afa2dec0bce596c5a2afd7e3674"
	I1008 19:12:15.311551  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311583  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 19:12:15.311641  585014 out.go:270] X Problems detected in kubelet:
	W1008 19:12:15.311653  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.497785     901 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311664  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.497941     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	W1008 19:12:15.311676  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: W1008 19:07:33.505996     901 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:embed-certs-783146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-783146' and this object
	W1008 19:12:15.311681  585014 out.go:270]   Oct 08 19:07:33 embed-certs-783146 kubelet[901]: E1008 19:07:33.506101     901 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-783146\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-783146' and this object" logger="UnhandledError"
	I1008 19:12:15.311687  585014 out.go:358] Setting ErrFile to fd 2...
	I1008 19:12:15.311695  585014 out.go:392] TERM=,COLORTERM=, which probably does not support color
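	(The "Problems detected in kubelet" summary above is produced by scanning the last kubelet journal lines for known problem patterns and echoing any hits back to the user; the hits here are node-authorizer denials emitted while the restarted kubelet was not yet associated with the objects it tried to list. A hedged Go sketch of that kind of scan is below; the two patterns used are assumptions for illustration, not minikube's actual pattern list.)

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the last kubelet log lines from journald, as the log above does.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// Illustrative patterns only; the real tool keeps its own list.
		if strings.Contains(line, "Unhandled Error") || strings.Contains(line, "is forbidden") {
			problems = append(problems, line)
		}
	}
	if len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println("  " + p)
		}
	}
}
```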
	I1008 19:12:15.654850  585386 out.go:235]   - Booting up control plane ...
	I1008 19:12:15.654984  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:15.661461  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:15.662847  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:15.663628  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:15.666409  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:12:16.682464  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:19.182595  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:21.184074  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:23.682867  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:25.319305  585014 system_pods.go:59] 8 kube-system pods found
	I1008 19:12:25.319336  585014 system_pods.go:61] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.319340  585014 system_pods.go:61] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.319344  585014 system_pods.go:61] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.319348  585014 system_pods.go:61] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.319351  585014 system_pods.go:61] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.319354  585014 system_pods.go:61] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.319362  585014 system_pods.go:61] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.319365  585014 system_pods.go:61] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.319371  585014 system_pods.go:74] duration metric: took 11.400819931s to wait for pod list to return data ...
	I1008 19:12:25.319378  585014 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:12:25.322115  585014 default_sa.go:45] found service account: "default"
	I1008 19:12:25.322135  585014 default_sa.go:55] duration metric: took 2.751457ms for default service account to be created ...
	I1008 19:12:25.322143  585014 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:12:25.326570  585014 system_pods.go:86] 8 kube-system pods found
	I1008 19:12:25.326590  585014 system_pods.go:89] "coredns-7c65d6cfc9-kh9nk" [4fcd8158-57cf-4f5e-9be7-55c1107bf3b0] Running
	I1008 19:12:25.326595  585014 system_pods.go:89] "etcd-embed-certs-783146" [5d126735-4f89-471c-aa3d-34a2262020bd] Running
	I1008 19:12:25.326599  585014 system_pods.go:89] "kube-apiserver-embed-certs-783146" [fa49a0e3-e94a-4782-95d3-433431c338d3] Running
	I1008 19:12:25.326604  585014 system_pods.go:89] "kube-controller-manager-embed-certs-783146" [a274e17b-b8e1-44b1-a052-ff7e11289729] Running
	I1008 19:12:25.326610  585014 system_pods.go:89] "kube-proxy-9l7t7" [20a17c15-0fd2-40e8-b42a-ce35d2fbdf6d] Running
	I1008 19:12:25.326615  585014 system_pods.go:89] "kube-scheduler-embed-certs-783146" [84b6f62f-a0a1-4b21-9544-f7ef964f1faf] Running
	I1008 19:12:25.326625  585014 system_pods.go:89] "metrics-server-6867b74b74-4d48d" [7d305dc9-31d0-482b-8b3e-82be14daeaf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:12:25.326633  585014 system_pods.go:89] "storage-provisioner" [2ad6a8a6-5f69-4323-b540-2f8d330d8d84] Running
	I1008 19:12:25.326642  585014 system_pods.go:126] duration metric: took 4.494323ms to wait for k8s-apps to be running ...
	I1008 19:12:25.326651  585014 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:12:25.326701  585014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:25.344597  585014 system_svc.go:56] duration metric: took 17.941012ms WaitForService to wait for kubelet
	I1008 19:12:25.344621  585014 kubeadm.go:582] duration metric: took 4m49.072648847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:12:25.344638  585014 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:12:25.347385  585014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:12:25.347404  585014 node_conditions.go:123] node cpu capacity is 2
	I1008 19:12:25.347425  585014 node_conditions.go:105] duration metric: took 2.783181ms to run NodePressure ...
	I1008 19:12:25.347437  585014 start.go:241] waiting for startup goroutines ...
	I1008 19:12:25.347450  585014 start.go:246] waiting for cluster config update ...
	I1008 19:12:25.347463  585014 start.go:255] writing updated cluster config ...
	I1008 19:12:25.347823  585014 ssh_runner.go:195] Run: rm -f paused
	I1008 19:12:25.395903  585014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:12:25.397911  585014 out.go:177] * Done! kubectl is now configured to use "embed-certs-783146" cluster and "default" namespace by default
	I1008 19:12:25.683645  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:28.182995  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:30.183567  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:32.682881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.013046  585096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.145916528s)
	I1008 19:12:37.013156  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:12:37.028010  585096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 19:12:37.037493  585096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:12:37.046435  585096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:12:37.046455  585096 kubeadm.go:157] found existing configuration files:
	
	I1008 19:12:37.046495  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 19:12:37.055422  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:12:37.055482  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:12:37.064538  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 19:12:37.072968  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:12:37.073021  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:12:37.081754  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.090143  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:12:37.090179  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:12:37.098726  585096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 19:12:37.107261  585096 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:12:37.107308  585096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:12:37.115975  585096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:12:37.163570  585096 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1008 19:12:37.163642  585096 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:12:37.272891  585096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:12:37.273025  585096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:12:37.273151  585096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 19:12:37.284204  585096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:12:37.286084  585096 out.go:235]   - Generating certificates and keys ...
	I1008 19:12:37.286175  585096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:12:37.286263  585096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:12:37.286385  585096 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:12:37.286443  585096 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:12:37.286545  585096 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:12:37.286638  585096 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:12:37.286729  585096 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:12:37.286812  585096 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:12:37.286912  585096 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:12:37.287010  585096 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:12:37.287082  585096 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:12:37.287172  585096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:12:37.602946  585096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:12:37.727897  585096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 19:12:37.932126  585096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:12:37.989742  585096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:12:38.036655  585096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:12:38.037085  585096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:12:38.040618  585096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:12:35.182881  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:37.683718  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:38.042238  585096 out.go:235]   - Booting up control plane ...
	I1008 19:12:38.042374  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:12:38.042568  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:12:38.043504  585096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:12:38.065666  585096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:12:38.071727  585096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:12:38.071814  585096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:12:38.210382  585096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 19:12:38.210516  585096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 19:12:39.213697  585096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003319891s
	I1008 19:12:39.213803  585096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1008 19:12:43.717718  585096 kubeadm.go:310] [api-check] The API server is healthy after 4.502167036s
	I1008 19:12:43.728628  585096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 19:12:43.744283  585096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 19:12:43.775369  585096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 19:12:43.775621  585096 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-142496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 19:12:43.788583  585096 kubeadm.go:310] [bootstrap-token] Using token: srsq4v.7le212xun40ljc7w
	I1008 19:12:39.684554  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:42.183680  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:44.185065  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:43.789834  585096 out.go:235]   - Configuring RBAC rules ...
	I1008 19:12:43.789945  585096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 19:12:43.796091  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 19:12:43.807906  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 19:12:43.811025  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 19:12:43.814445  585096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 19:12:43.817615  585096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 19:12:44.122839  585096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 19:12:44.567387  585096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 19:12:45.122714  585096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 19:12:45.123480  585096 kubeadm.go:310] 
	I1008 19:12:45.123590  585096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 19:12:45.123617  585096 kubeadm.go:310] 
	I1008 19:12:45.123740  585096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 19:12:45.123749  585096 kubeadm.go:310] 
	I1008 19:12:45.123789  585096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 19:12:45.123870  585096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 19:12:45.123958  585096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 19:12:45.123984  585096 kubeadm.go:310] 
	I1008 19:12:45.124064  585096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 19:12:45.124080  585096 kubeadm.go:310] 
	I1008 19:12:45.124152  585096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 19:12:45.124162  585096 kubeadm.go:310] 
	I1008 19:12:45.124248  585096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 19:12:45.124366  585096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 19:12:45.124456  585096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 19:12:45.124469  585096 kubeadm.go:310] 
	I1008 19:12:45.124579  585096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 19:12:45.124682  585096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 19:12:45.124692  585096 kubeadm.go:310] 
	I1008 19:12:45.124804  585096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.124926  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d \
	I1008 19:12:45.124953  585096 kubeadm.go:310] 	--control-plane 
	I1008 19:12:45.124958  585096 kubeadm.go:310] 
	I1008 19:12:45.125086  585096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 19:12:45.125093  585096 kubeadm.go:310] 
	I1008 19:12:45.125182  585096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token srsq4v.7le212xun40ljc7w \
	I1008 19:12:45.125321  585096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d 
	I1008 19:12:45.126852  585096 kubeadm.go:310] W1008 19:12:37.105673    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127231  585096 kubeadm.go:310] W1008 19:12:37.106373    2530 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1008 19:12:45.127380  585096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
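	For reference, the worker join command printed by kubeadm above is normally run as root on the machine that should join the cluster; collected onto one line from the log (token and CA hash exactly as printed there), it would look like this sketch:

	  # run as root on the joining node (values copied from the kubeadm output above)
	  kubeadm join control-plane.minikube.internal:8444 \
	    --token srsq4v.7le212xun40ljc7w \
	    --discovery-token-ca-cert-hash sha256:abb84a8a4c2843bd928de69749185c3bf514b36073bae93a45e0d8683d59797d

	Adding --control-plane, as in the first variant printed above, joins the node as an additional control-plane member instead of a worker.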
	I1008 19:12:45.127429  585096 cni.go:84] Creating CNI manager for ""
	I1008 19:12:45.127452  585096 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 19:12:45.129742  585096 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 19:12:45.130870  585096 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 19:12:45.143909  585096 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 19:12:45.170901  585096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 19:12:45.170965  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:45.170972  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-142496 minikube.k8s.io/updated_at=2024_10_08T19_12_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=default-k8s-diff-port-142496 minikube.k8s.io/primary=true
	I1008 19:12:45.198031  585096 ops.go:34] apiserver oom_adj: -16
	I1008 19:12:45.385789  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.684251  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:49.183225  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:45.886434  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.386165  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:46.886920  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.386786  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:47.885835  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.386706  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:48.885981  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.386856  585096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 19:12:49.471554  585096 kubeadm.go:1113] duration metric: took 4.300656747s to wait for elevateKubeSystemPrivileges
	I1008 19:12:49.471596  585096 kubeadm.go:394] duration metric: took 5m2.486064826s to StartCluster
	I1008 19:12:49.471627  585096 settings.go:142] acquiring lock: {Name:mk3c9bea822cc51a88bd25e2b522a65195bbd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.471736  585096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 19:12:49.473381  585096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19774-529764/kubeconfig: {Name:mk4f9e93fa27cb28f5eb850a3f9af39c213f60d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 19:12:49.473676  585096 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 19:12:49.473768  585096 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 19:12:49.473874  585096 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473897  585096 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142496"
	I1008 19:12:49.473899  585096 config.go:182] Loaded profile config "default-k8s-diff-port-142496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 19:12:49.473904  585096 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473923  585096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142496"
	W1008 19:12:49.473907  585096 addons.go:243] addon storage-provisioner should already be in state true
	I1008 19:12:49.473939  585096 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142496"
	I1008 19:12:49.473955  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.473967  585096 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.473981  585096 addons.go:243] addon metrics-server should already be in state true
	I1008 19:12:49.474022  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.474283  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474313  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474338  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474366  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.474373  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.474405  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.475217  585096 out.go:177] * Verifying Kubernetes components...
	I1008 19:12:49.476402  585096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 19:12:49.490880  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1008 19:12:49.491405  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.492070  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.492093  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.492454  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.492990  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.493040  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.493623  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 19:12:49.493646  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I1008 19:12:49.494011  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494067  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.494548  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494565  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494763  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.494790  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.494930  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495102  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.495276  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.495871  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.495908  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.498744  585096 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142496"
	W1008 19:12:49.498764  585096 addons.go:243] addon default-storageclass should already be in state true
	I1008 19:12:49.498787  585096 host.go:66] Checking if "default-k8s-diff-port-142496" exists ...
	I1008 19:12:49.499142  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.499173  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.514047  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I1008 19:12:49.514527  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.515028  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.515046  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.515493  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.515662  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.516519  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1008 19:12:49.517015  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.517643  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.517661  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.517706  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.517757  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I1008 19:12:49.518133  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.518458  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.518617  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.518643  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.518681  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.519107  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.519527  585096 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1008 19:12:49.519808  585096 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19774-529764/.minikube/bin/docker-machine-driver-kvm2
	I1008 19:12:49.519923  585096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 19:12:49.520415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.520624  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 19:12:49.520644  585096 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 19:12:49.520669  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.522226  585096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 19:12:49.523372  585096 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.523396  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 19:12:49.523415  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.523947  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524437  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.524464  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.524651  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.524830  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.525042  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.525198  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.527349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527670  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.527693  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.527842  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.528009  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.528186  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.528325  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.536509  585096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I1008 19:12:49.536879  585096 main.go:141] libmachine: () Calling .GetVersion
	I1008 19:12:49.537341  585096 main.go:141] libmachine: Using API Version  1
	I1008 19:12:49.537359  585096 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 19:12:49.537606  585096 main.go:141] libmachine: () Calling .GetMachineName
	I1008 19:12:49.537897  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetState
	I1008 19:12:49.539570  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .DriverName
	I1008 19:12:49.539810  585096 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.539831  585096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 19:12:49.539848  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHHostname
	I1008 19:12:49.542955  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543349  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:28:c1", ip: ""} in network mk-default-k8s-diff-port-142496: {Iface:virbr2 ExpiryTime:2024-10-08 20:07:32 +0000 UTC Type:0 Mac:52:54:00:14:28:c1 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:default-k8s-diff-port-142496 Clientid:01:52:54:00:14:28:c1}
	I1008 19:12:49.543522  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | domain default-k8s-diff-port-142496 has defined IP address 192.168.50.213 and MAC address 52:54:00:14:28:c1 in network mk-default-k8s-diff-port-142496
	I1008 19:12:49.543543  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHPort
	I1008 19:12:49.543726  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHKeyPath
	I1008 19:12:49.543888  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .GetSSHUsername
	I1008 19:12:49.544023  585096 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/default-k8s-diff-port-142496/id_rsa Username:docker}
	I1008 19:12:49.721845  585096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 19:12:49.741622  585096 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.763968  585096 node_ready.go:49] node "default-k8s-diff-port-142496" has status "Ready":"True"
	I1008 19:12:49.764005  585096 node_ready.go:38] duration metric: took 22.348135ms for node "default-k8s-diff-port-142496" to be "Ready" ...
	I1008 19:12:49.764019  585096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:12:49.793150  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:49.867565  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 19:12:49.904041  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 19:12:49.912694  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 19:12:49.912723  585096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1008 19:12:49.962053  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 19:12:49.962082  585096 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 19:12:50.004678  585096 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.004709  585096 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 19:12:50.068528  585096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 19:12:50.394807  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394824  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.394836  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.394841  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395140  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395161  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395172  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395181  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395181  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395195  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395201  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.395205  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.395262  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.395425  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395439  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395616  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.395668  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.395643  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416509  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.416532  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.416815  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.416865  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.416880  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634404  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634428  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634722  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.634744  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.634752  585096 main.go:141] libmachine: Making call to close driver server
	I1008 19:12:50.634761  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) Calling .Close
	I1008 19:12:50.634769  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635036  585096 main.go:141] libmachine: (default-k8s-diff-port-142496) DBG | Closing plugin on server side
	I1008 19:12:50.635066  585096 main.go:141] libmachine: Successfully made call to close driver server
	I1008 19:12:50.635079  585096 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 19:12:50.635100  585096 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-142496"
	I1008 19:12:50.636555  585096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1008 19:12:51.683959  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.182376  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:50.637816  585096 addons.go:510] duration metric: took 1.164063633s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1008 19:12:51.799881  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:54.299619  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:55.665398  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:12:55.666338  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:12:55.666544  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
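	The kubelet-check failure above (connection refused on localhost:10248) means the kubelet on that machine is not serving its health endpoint yet. A few standard commands for inspecting this directly on the node (not part of this log, shown only as a troubleshooting sketch):

	  # on the affected node
	  sudo systemctl status kubelet            # is the service running and enabled?
	  sudo journalctl -u kubelet -n 100        # recent kubelet logs
	  curl -sS http://localhost:10248/healthz  # the same probe kubeadm uses above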
	I1008 19:12:56.183179  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683102  584371 pod_ready.go:103] pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.683159  584371 pod_ready.go:82] duration metric: took 4m0.006623922s for pod "metrics-server-6867b74b74-rlt25" in "kube-system" namespace to be "Ready" ...
	E1008 19:12:58.683173  584371 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1008 19:12:58.683184  584371 pod_ready.go:39] duration metric: took 4m4.541923995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
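	When a wait like the one above times out on a single pod (here metrics-server-6867b74b74-rlt25 after 4m0s), the usual next step is to ask the cluster why the pod is not Ready. A minimal sketch with standard kubectl commands (not taken from this log):

	  kubectl -n kube-system get pod metrics-server-6867b74b74-rlt25 -o wide
	  kubectl -n kube-system describe pod metrics-server-6867b74b74-rlt25
	  kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20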
	I1008 19:12:58.683207  584371 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:12:58.683245  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:12:58.683296  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:12:58.729385  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:58.729407  584371 cri.go:89] found id: ""
	I1008 19:12:58.729417  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:12:58.729472  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.734291  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:12:58.734382  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:12:58.772015  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:12:58.772050  584371 cri.go:89] found id: ""
	I1008 19:12:58.772062  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:12:58.772123  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.776231  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:12:58.776300  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:12:58.812962  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:58.812982  584371 cri.go:89] found id: ""
	I1008 19:12:58.812991  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:12:58.813046  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.816951  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:12:58.817002  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:12:58.852918  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:58.852939  584371 cri.go:89] found id: ""
	I1008 19:12:58.852946  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:12:58.852992  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.857184  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:12:58.857245  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:12:58.895233  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:12:58.895254  584371 cri.go:89] found id: ""
	I1008 19:12:58.895264  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:12:58.895317  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.899301  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:12:58.899354  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:12:58.933918  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:58.933946  584371 cri.go:89] found id: ""
	I1008 19:12:58.933956  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:12:58.934003  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:58.938274  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:12:58.938361  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:12:58.980067  584371 cri.go:89] found id: ""
	I1008 19:12:58.980094  584371 logs.go:282] 0 containers: []
	W1008 19:12:58.980104  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:12:58.980113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:12:58.980174  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:12:59.013783  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:12:59.013812  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.013817  584371 cri.go:89] found id: ""
	I1008 19:12:59.013827  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:12:59.013886  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.018420  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:12:59.024462  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:12:59.024486  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:12:59.062654  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:12:59.062688  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:12:59.110932  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:12:59.110966  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:12:59.248699  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:12:59.248734  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:12:59.294439  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:12:59.294473  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:12:59.331208  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:12:59.331241  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:12:59.374242  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:12:59.374283  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:12:56.799487  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:58.800290  585096 pod_ready.go:103] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"False"
	I1008 19:12:59.800320  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.800349  585096 pod_ready.go:82] duration metric: took 10.007162242s for pod "coredns-7c65d6cfc9-wrz7s" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.800361  585096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804590  585096 pod_ready.go:93] pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.804609  585096 pod_ready.go:82] duration metric: took 4.240474ms for pod "coredns-7c65d6cfc9-x4j67" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.804620  585096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808737  585096 pod_ready.go:93] pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.808754  585096 pod_ready.go:82] duration metric: took 4.127686ms for pod "etcd-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.808762  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813126  585096 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.813146  585096 pod_ready.go:82] duration metric: took 4.37796ms for pod "kube-apiserver-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.813154  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817020  585096 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:12:59.817039  585096 pod_ready.go:82] duration metric: took 3.878053ms for pod "kube-controller-manager-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:12:59.817048  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197958  585096 pod_ready.go:93] pod "kube-proxy-wd5kv" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.197983  585096 pod_ready.go:82] duration metric: took 380.928087ms for pod "kube-proxy-wd5kv" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.197992  585096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597495  585096 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace has status "Ready":"True"
	I1008 19:13:00.597521  585096 pod_ready.go:82] duration metric: took 399.522182ms for pod "kube-scheduler-default-k8s-diff-port-142496" in "kube-system" namespace to be "Ready" ...
	I1008 19:13:00.597529  585096 pod_ready.go:39] duration metric: took 10.833495765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1008 19:13:00.597545  585096 api_server.go:52] waiting for apiserver process to appear ...
	I1008 19:13:00.597612  585096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:00.613266  585096 api_server.go:72] duration metric: took 11.139554705s to wait for apiserver process to appear ...
	I1008 19:13:00.613289  585096 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:00.613308  585096 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8444/healthz ...
	I1008 19:13:00.618420  585096 api_server.go:279] https://192.168.50.213:8444/healthz returned 200:
	ok
	I1008 19:13:00.619376  585096 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:00.619399  585096 api_server.go:131] duration metric: took 6.102941ms to wait for apiserver health ...
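	The healthz probe above can be reproduced by hand against the same endpoint. A minimal sketch, assuming the cluster's default RBAC (which permits anonymous reads of /healthz); the apiserver serves TLS, so either skip verification for a quick check or point curl at the cluster CA:

	  # quick check, skipping TLS verification
	  curl -k https://192.168.50.213:8444/healthz
	  # expected body on success: ok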
	I1008 19:13:00.619407  585096 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:00.800687  585096 system_pods.go:59] 9 kube-system pods found
	I1008 19:13:00.800720  585096 system_pods.go:61] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:00.800729  585096 system_pods.go:61] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:00.800733  585096 system_pods.go:61] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:00.800737  585096 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:00.800740  585096 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:00.800743  585096 system_pods.go:61] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:00.800747  585096 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:00.800752  585096 system_pods.go:61] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:00.800755  585096 system_pods.go:61] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:00.800765  585096 system_pods.go:74] duration metric: took 181.352111ms to wait for pod list to return data ...
	I1008 19:13:00.800773  585096 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:00.997631  585096 default_sa.go:45] found service account: "default"
	I1008 19:13:00.997657  585096 default_sa.go:55] duration metric: took 196.876434ms for default service account to be created ...
	I1008 19:13:00.997667  585096 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:01.199366  585096 system_pods.go:86] 9 kube-system pods found
	I1008 19:13:01.199396  585096 system_pods.go:89] "coredns-7c65d6cfc9-wrz7s" [e441884e-7c57-4a73-86bb-c46629d2eda6] Running
	I1008 19:13:01.199402  585096 system_pods.go:89] "coredns-7c65d6cfc9-x4j67" [89141081-eb1e-466a-913d-597e8df02125] Running
	I1008 19:13:01.199406  585096 system_pods.go:89] "etcd-default-k8s-diff-port-142496" [f6dfe1de-a197-4f22-aca2-9b3b059d3a33] Running
	I1008 19:13:01.199409  585096 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142496" [2b80bd98-cc13-4c53-9080-bde721a119ca] Running
	I1008 19:13:01.199413  585096 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142496" [dfb6bc3c-9205-4de2-a8d2-8300ee38ec4d] Running
	I1008 19:13:01.199416  585096 system_pods.go:89] "kube-proxy-wd5kv" [714118a5-ec5d-448c-ad63-7f0303d00eb0] Running
	I1008 19:13:01.199419  585096 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142496" [c5549d4b-23f3-4d12-a69e-231e5be4a98f] Running
	I1008 19:13:01.199426  585096 system_pods.go:89] "metrics-server-6867b74b74-wvh5g" [99dacec0-80f9-4662-bbea-6191aa9b62d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:01.199430  585096 system_pods.go:89] "storage-provisioner" [c3c57b3f-59d9-49bb-ba82-caee6af45bde] Running
	I1008 19:13:01.199439  585096 system_pods.go:126] duration metric: took 201.766214ms to wait for k8s-apps to be running ...
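	One pod in the listing above, metrics-server-6867b74b74-wvh5g, is Pending with its container not ready. That is consistent with the earlier "- Using image fake.domain/registry.k8s.io/echoserver:1.4" line: the addon was pointed at an unreachable registry, so the image pull cannot succeed and the pod is not expected to become Ready during this run. A hedged way to confirm the pull failure with standard kubectl (not part of the log):

	  kubectl -n kube-system describe pod metrics-server-6867b74b74-wvh5g | tail -n 20
	  # the Events section would be expected to show ErrImagePull / ImagePullBackOff for fake.domain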
	I1008 19:13:01.199447  585096 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:01.199492  585096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:01.214863  585096 system_svc.go:56] duration metric: took 15.401989ms WaitForService to wait for kubelet
	I1008 19:13:01.214895  585096 kubeadm.go:582] duration metric: took 11.741185862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:01.214919  585096 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:01.397506  585096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:01.397530  585096 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:01.397541  585096 node_conditions.go:105] duration metric: took 182.616774ms to run NodePressure ...
	I1008 19:13:01.397553  585096 start.go:241] waiting for startup goroutines ...
	I1008 19:13:01.397560  585096 start.go:246] waiting for cluster config update ...
	I1008 19:13:01.397570  585096 start.go:255] writing updated cluster config ...
	I1008 19:13:01.397828  585096 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:01.448158  585096 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:01.450201  585096 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142496" cluster and "default" namespace by default
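	At this point the default-k8s-diff-port-142496 cluster is up and kubectl on the host is pointed at it. A minimal sketch of the usual sanity checks (standard kubectl commands, not part of this log):

	  kubectl config current-context        # should report default-k8s-diff-port-142496
	  kubectl get nodes -o wide             # the single control-plane node should be Ready
	  kubectl -n kube-system get pods       # core pods Running; metrics-server expected Pending in this run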
	I1008 19:13:00.666971  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:00.667239  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:12:59.438777  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:12:59.438814  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:12:59.945253  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:12:59.945302  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:00.016570  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:00.016607  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:00.034150  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:00.034183  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:00.075423  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:00.075456  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:00.111132  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:00.111164  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.646570  584371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 19:13:02.666594  584371 api_server.go:72] duration metric: took 4m13.762192057s to wait for apiserver process to appear ...
	I1008 19:13:02.666620  584371 api_server.go:88] waiting for apiserver healthz status ...
	I1008 19:13:02.666663  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:02.666718  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:02.704214  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:02.704242  584371 cri.go:89] found id: ""
	I1008 19:13:02.704250  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:02.704298  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.708636  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:02.708717  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:02.748418  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:02.748444  584371 cri.go:89] found id: ""
	I1008 19:13:02.748455  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:02.748515  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.753267  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:02.753332  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:02.790534  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:02.790562  584371 cri.go:89] found id: ""
	I1008 19:13:02.790571  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:02.790636  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.794880  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:02.794950  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:02.834754  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:02.834774  584371 cri.go:89] found id: ""
	I1008 19:13:02.834781  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:02.834830  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.839391  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:02.839463  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:02.878344  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:02.878371  584371 cri.go:89] found id: ""
	I1008 19:13:02.878380  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:02.878425  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.882939  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:02.883025  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:02.920081  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:02.920104  584371 cri.go:89] found id: ""
	I1008 19:13:02.920112  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:02.920168  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:02.924141  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:02.924205  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:02.959700  584371 cri.go:89] found id: ""
	I1008 19:13:02.959730  584371 logs.go:282] 0 containers: []
	W1008 19:13:02.959741  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:02.959750  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:02.959822  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:02.996900  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:02.996927  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:02.996933  584371 cri.go:89] found id: ""
	I1008 19:13:02.996940  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:02.996989  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.001152  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:03.005021  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:03.005046  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:03.069775  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:03.069813  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:03.120028  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:03.120060  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:03.155756  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:03.155784  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:03.195587  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:03.195624  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:03.231844  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:03.231875  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:03.271156  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:03.271187  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:03.286994  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:03.287017  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:03.397237  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:03.397269  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:03.442373  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:03.442407  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:03.500191  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:03.500222  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:03.535448  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:03.535490  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:03.966382  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:03.966425  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:06.513885  584371 api_server.go:253] Checking apiserver healthz at https://192.168.61.141:8443/healthz ...
	I1008 19:13:06.518111  584371 api_server.go:279] https://192.168.61.141:8443/healthz returned 200:
	ok
	I1008 19:13:06.519310  584371 api_server.go:141] control plane version: v1.31.1
	I1008 19:13:06.519331  584371 api_server.go:131] duration metric: took 3.852704338s to wait for apiserver health ...
	I1008 19:13:06.519341  584371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 19:13:06.519370  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:13:06.519417  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:13:06.558940  584371 cri.go:89] found id: "ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:06.558965  584371 cri.go:89] found id: ""
	I1008 19:13:06.558979  584371 logs.go:282] 1 containers: [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005]
	I1008 19:13:06.559029  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.563471  584371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:13:06.563537  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:13:06.607844  584371 cri.go:89] found id: "c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:06.607873  584371 cri.go:89] found id: ""
	I1008 19:13:06.607883  584371 logs.go:282] 1 containers: [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af]
	I1008 19:13:06.607944  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.612399  584371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:13:06.612456  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:13:06.645502  584371 cri.go:89] found id: "09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:06.645521  584371 cri.go:89] found id: ""
	I1008 19:13:06.645528  584371 logs.go:282] 1 containers: [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789]
	I1008 19:13:06.645575  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.649442  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:13:06.649519  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:13:06.685085  584371 cri.go:89] found id: "51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:06.685114  584371 cri.go:89] found id: ""
	I1008 19:13:06.685126  584371 logs.go:282] 1 containers: [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e]
	I1008 19:13:06.685183  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.689859  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:13:06.689935  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:13:06.724775  584371 cri.go:89] found id: "f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:06.724803  584371 cri.go:89] found id: ""
	I1008 19:13:06.724814  584371 logs.go:282] 1 containers: [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8]
	I1008 19:13:06.724873  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.729489  584371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:13:06.729542  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:13:06.776599  584371 cri.go:89] found id: "d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:06.776626  584371 cri.go:89] found id: ""
	I1008 19:13:06.776636  584371 logs.go:282] 1 containers: [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59]
	I1008 19:13:06.776704  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.780790  584371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:13:06.780863  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:13:06.817072  584371 cri.go:89] found id: ""
	I1008 19:13:06.817097  584371 logs.go:282] 0 containers: []
	W1008 19:13:06.817106  584371 logs.go:284] No container was found matching "kindnet"
	I1008 19:13:06.817113  584371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1008 19:13:06.817171  584371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1008 19:13:06.855429  584371 cri.go:89] found id: "f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:06.855453  584371 cri.go:89] found id: "035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:06.855457  584371 cri.go:89] found id: ""
	I1008 19:13:06.855465  584371 logs.go:282] 2 containers: [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27]
	I1008 19:13:06.855520  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.859774  584371 ssh_runner.go:195] Run: which crictl
	I1008 19:13:06.863800  584371 logs.go:123] Gathering logs for kubelet ...
	I1008 19:13:06.863821  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 19:13:06.931413  584371 logs.go:123] Gathering logs for dmesg ...
	I1008 19:13:06.931443  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:13:06.946213  584371 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:13:06.946236  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 19:13:07.070604  584371 logs.go:123] Gathering logs for kube-apiserver [ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005] ...
	I1008 19:13:07.070640  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebd3d4cf5921485c8e16e692aac975c6c53d4d1a4f79d5e52f514e90c5c47005"
	I1008 19:13:07.114749  584371 logs.go:123] Gathering logs for coredns [09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789] ...
	I1008 19:13:07.114782  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09475152f3f1b6327efe3c6614b4c176b4dc338019f0af0fce55447f4de5e789"
	I1008 19:13:07.152555  584371 logs.go:123] Gathering logs for kube-proxy [f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8] ...
	I1008 19:13:07.152584  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1591b11958e9ae2ac99c4bba21321fdce502e17d212e279dddc2cbfba7ed7b8"
	I1008 19:13:07.192730  584371 logs.go:123] Gathering logs for kube-controller-manager [d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59] ...
	I1008 19:13:07.192759  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d97350daf0186e9ddce10abe810c3c055b8da99fcd5ae1dc0f06729ae39a7c59"
	I1008 19:13:07.242001  584371 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:13:07.242036  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:13:07.612662  584371 logs.go:123] Gathering logs for etcd [c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af] ...
	I1008 19:13:07.612714  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8765b4e849e7e467a100350f67354fb09e73d0912cec4a06171789a8fa1d8af"
	I1008 19:13:07.656655  584371 logs.go:123] Gathering logs for kube-scheduler [51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e] ...
	I1008 19:13:07.656700  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51e1de45365e826eea898cc3d5f3ca124f9c9d8e16e196538db34f0b08c9cd9e"
	I1008 19:13:07.695462  584371 logs.go:123] Gathering logs for storage-provisioner [f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa] ...
	I1008 19:13:07.695494  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f17c1063782282c0c62aa214112d9836f9a926f67aeb0bd261d50b52befaa3fa"
	I1008 19:13:07.733107  584371 logs.go:123] Gathering logs for storage-provisioner [035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27] ...
	I1008 19:13:07.733143  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035c2e708170eaee7ff6414fb7f9b1946cfda6eabd6225abf977031a13efbb27"
	I1008 19:13:07.779348  584371 logs.go:123] Gathering logs for container status ...
	I1008 19:13:07.779382  584371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:13:10.325584  584371 system_pods.go:59] 8 kube-system pods found
	I1008 19:13:10.325616  584371 system_pods.go:61] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.325620  584371 system_pods.go:61] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.325624  584371 system_pods.go:61] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.325628  584371 system_pods.go:61] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.325631  584371 system_pods.go:61] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.325634  584371 system_pods.go:61] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.325639  584371 system_pods.go:61] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.325644  584371 system_pods.go:61] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.325651  584371 system_pods.go:74] duration metric: took 3.806304739s to wait for pod list to return data ...
	I1008 19:13:10.325659  584371 default_sa.go:34] waiting for default service account to be created ...
	I1008 19:13:10.328062  584371 default_sa.go:45] found service account: "default"
	I1008 19:13:10.328082  584371 default_sa.go:55] duration metric: took 2.41797ms for default service account to be created ...
	I1008 19:13:10.328089  584371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 19:13:10.332201  584371 system_pods.go:86] 8 kube-system pods found
	I1008 19:13:10.332224  584371 system_pods.go:89] "coredns-7c65d6cfc9-r8qft" [585e6c86-8ece-4a3e-af02-7bb0a97063be] Running
	I1008 19:13:10.332229  584371 system_pods.go:89] "etcd-no-preload-966632" [c2e9a777-9be6-408f-8b09-6fccfd32f4ee] Running
	I1008 19:13:10.332233  584371 system_pods.go:89] "kube-apiserver-no-preload-966632" [7492c882-9b78-4e9e-9ff7-918cb5effab3] Running
	I1008 19:13:10.332237  584371 system_pods.go:89] "kube-controller-manager-no-preload-966632" [87edc418-9f67-43f8-80b9-679d237380bb] Running
	I1008 19:13:10.332241  584371 system_pods.go:89] "kube-proxy-qpnvm" [37c3de1b-a732-4c1b-b9cb-8c6fcd833717] Running
	I1008 19:13:10.332245  584371 system_pods.go:89] "kube-scheduler-no-preload-966632" [32a4d2ac-84a3-451f-ab87-a23f1e3ef056] Running
	I1008 19:13:10.332250  584371 system_pods.go:89] "metrics-server-6867b74b74-rlt25" [f89db6b4-a0fd-43c3-a2ba-65d8c2de3617] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 19:13:10.332254  584371 system_pods.go:89] "storage-provisioner" [c664c1f1-4350-423c-bd19-9e64e9efab2e] Running
	I1008 19:13:10.332261  584371 system_pods.go:126] duration metric: took 4.167739ms to wait for k8s-apps to be running ...
	I1008 19:13:10.332270  584371 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 19:13:10.332313  584371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:13:10.350257  584371 system_svc.go:56] duration metric: took 17.979349ms WaitForService to wait for kubelet
	I1008 19:13:10.350288  584371 kubeadm.go:582] duration metric: took 4m21.445892386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 19:13:10.350310  584371 node_conditions.go:102] verifying NodePressure condition ...
	I1008 19:13:10.352582  584371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 19:13:10.352598  584371 node_conditions.go:123] node cpu capacity is 2
	I1008 19:13:10.352609  584371 node_conditions.go:105] duration metric: took 2.294326ms to run NodePressure ...
	I1008 19:13:10.352620  584371 start.go:241] waiting for startup goroutines ...
	I1008 19:13:10.352626  584371 start.go:246] waiting for cluster config update ...
	I1008 19:13:10.352636  584371 start.go:255] writing updated cluster config ...
	I1008 19:13:10.352882  584371 ssh_runner.go:195] Run: rm -f paused
	I1008 19:13:10.401998  584371 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1008 19:13:10.404037  584371 out.go:177] * Done! kubectl is now configured to use "no-preload-966632" cluster and "default" namespace by default
	I1008 19:13:10.667801  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:10.668103  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:13:30.668484  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:13:30.668799  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669570  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:10.669859  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:10.669869  585386 kubeadm.go:310] 
	I1008 19:14:10.669920  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:14:10.669995  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:14:10.670019  585386 kubeadm.go:310] 
	I1008 19:14:10.670071  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:14:10.670121  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:14:10.670251  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:14:10.670260  585386 kubeadm.go:310] 
	I1008 19:14:10.670423  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:14:10.670498  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:14:10.670551  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:14:10.670558  585386 kubeadm.go:310] 
	I1008 19:14:10.670702  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:14:10.670819  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:14:10.670830  585386 kubeadm.go:310] 
	I1008 19:14:10.670988  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:14:10.671103  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:14:10.671236  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:14:10.671343  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:14:10.671357  585386 kubeadm.go:310] 
	I1008 19:14:10.672523  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:14:10.672632  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:14:10.672726  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1008 19:14:10.672874  585386 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 19:14:10.672936  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 19:14:11.145922  585386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 19:14:11.161774  585386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 19:14:11.172223  585386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 19:14:11.172256  585386 kubeadm.go:157] found existing configuration files:
	
	I1008 19:14:11.172309  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 19:14:11.182399  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 19:14:11.182453  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 19:14:11.191984  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 19:14:11.201534  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 19:14:11.201596  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 19:14:11.211292  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.220605  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 19:14:11.220662  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 19:14:11.231345  585386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 19:14:11.241183  585386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 19:14:11.241243  585386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 19:14:11.250870  585386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 19:14:11.318814  585386 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1008 19:14:11.318930  585386 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 19:14:11.458843  585386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 19:14:11.458994  585386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 19:14:11.459125  585386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 19:14:11.630763  585386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 19:14:11.632916  585386 out.go:235]   - Generating certificates and keys ...
	I1008 19:14:11.633031  585386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 19:14:11.633137  585386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 19:14:11.633246  585386 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 19:14:11.633332  585386 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 19:14:11.633426  585386 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 19:14:11.633503  585386 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 19:14:11.633608  585386 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 19:14:11.633677  585386 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 19:14:11.633954  585386 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 19:14:11.634773  585386 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 19:14:11.635047  585386 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 19:14:11.635133  585386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 19:14:12.370791  585386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 19:14:12.517416  585386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 19:14:12.600908  585386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 19:14:12.705806  585386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 19:14:12.728338  585386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 19:14:12.729652  585386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 19:14:12.729721  585386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 19:14:12.873126  585386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 19:14:12.875130  585386 out.go:235]   - Booting up control plane ...
	I1008 19:14:12.875257  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 19:14:12.881155  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 19:14:12.881265  585386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 19:14:12.881391  585386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 19:14:12.883968  585386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 19:14:52.886513  585386 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1008 19:14:52.886666  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:52.886935  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:14:57.887177  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:14:57.887390  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:07.888039  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:07.888254  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:15:27.889072  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:15:27.889373  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891253  585386 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1008 19:16:07.891548  585386 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1008 19:16:07.891562  585386 kubeadm.go:310] 
	I1008 19:16:07.891624  585386 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1008 19:16:07.891683  585386 kubeadm.go:310] 		timed out waiting for the condition
	I1008 19:16:07.891691  585386 kubeadm.go:310] 
	I1008 19:16:07.891744  585386 kubeadm.go:310] 	This error is likely caused by:
	I1008 19:16:07.891787  585386 kubeadm.go:310] 		- The kubelet is not running
	I1008 19:16:07.891914  585386 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1008 19:16:07.891931  585386 kubeadm.go:310] 
	I1008 19:16:07.892025  585386 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1008 19:16:07.892054  585386 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1008 19:16:07.892098  585386 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1008 19:16:07.892127  585386 kubeadm.go:310] 
	I1008 19:16:07.892240  585386 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1008 19:16:07.892348  585386 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 19:16:07.892360  585386 kubeadm.go:310] 
	I1008 19:16:07.892505  585386 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1008 19:16:07.892627  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 19:16:07.892722  585386 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1008 19:16:07.892846  585386 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1008 19:16:07.892870  585386 kubeadm.go:310] 
	I1008 19:16:07.893773  585386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 19:16:07.893901  585386 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1008 19:16:07.893995  585386 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1008 19:16:07.894186  585386 kubeadm.go:394] duration metric: took 7m58.068959565s to StartCluster
	I1008 19:16:07.894273  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 19:16:07.894380  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 19:16:07.941585  585386 cri.go:89] found id: ""
	I1008 19:16:07.941618  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.941629  585386 logs.go:284] No container was found matching "kube-apiserver"
	I1008 19:16:07.941635  585386 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 19:16:07.941701  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 19:16:07.979854  585386 cri.go:89] found id: ""
	I1008 19:16:07.979882  585386 logs.go:282] 0 containers: []
	W1008 19:16:07.979892  585386 logs.go:284] No container was found matching "etcd"
	I1008 19:16:07.979900  585386 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 19:16:07.979961  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 19:16:08.013599  585386 cri.go:89] found id: ""
	I1008 19:16:08.013631  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.013643  585386 logs.go:284] No container was found matching "coredns"
	I1008 19:16:08.013649  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 19:16:08.013709  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 19:16:08.045168  585386 cri.go:89] found id: ""
	I1008 19:16:08.045195  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.045204  585386 logs.go:284] No container was found matching "kube-scheduler"
	I1008 19:16:08.045210  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 19:16:08.045267  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 19:16:08.079052  585386 cri.go:89] found id: ""
	I1008 19:16:08.079080  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.079096  585386 logs.go:284] No container was found matching "kube-proxy"
	I1008 19:16:08.079104  585386 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 19:16:08.079159  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 19:16:08.113212  585386 cri.go:89] found id: ""
	I1008 19:16:08.113239  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.113248  585386 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 19:16:08.113254  585386 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 19:16:08.113316  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 19:16:08.146546  585386 cri.go:89] found id: ""
	I1008 19:16:08.146576  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.146586  585386 logs.go:284] No container was found matching "kindnet"
	I1008 19:16:08.146592  585386 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1008 19:16:08.146652  585386 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1008 19:16:08.180186  585386 cri.go:89] found id: ""
	I1008 19:16:08.180219  585386 logs.go:282] 0 containers: []
	W1008 19:16:08.180233  585386 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1008 19:16:08.180247  585386 logs.go:123] Gathering logs for dmesg ...
	I1008 19:16:08.180267  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 19:16:08.193463  585386 logs.go:123] Gathering logs for describe nodes ...
	I1008 19:16:08.193492  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 19:16:08.269950  585386 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 19:16:08.269976  585386 logs.go:123] Gathering logs for CRI-O ...
	I1008 19:16:08.269989  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 19:16:08.381506  585386 logs.go:123] Gathering logs for container status ...
	I1008 19:16:08.381560  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 19:16:08.432498  585386 logs.go:123] Gathering logs for kubelet ...
	I1008 19:16:08.432529  585386 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 19:16:08.485778  585386 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1008 19:16:08.485866  585386 out.go:270] * 
	W1008 19:16:08.485954  585386 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.485971  585386 out.go:270] * 
	W1008 19:16:08.486761  585386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 19:16:08.489676  585386 out.go:201] 
	W1008 19:16:08.490756  585386 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 19:16:08.490790  585386 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1008 19:16:08.490817  585386 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1008 19:16:08.492204  585386 out.go:201] 
	
	
	==> CRI-O <==
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.698879855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415677698850176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37c91fec-e1d4-4659-9af3-eff810fa8047 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.699772618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74a4f43c-ab9b-4f5e-b6e5-132a84055c96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.699843556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74a4f43c-ab9b-4f5e-b6e5-132a84055c96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.699893062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=74a4f43c-ab9b-4f5e-b6e5-132a84055c96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.735281971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b17a8d8d-c1f9-479b-9a03-7296648b9c0d name=/runtime.v1.RuntimeService/Version
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.735366836Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b17a8d8d-c1f9-479b-9a03-7296648b9c0d name=/runtime.v1.RuntimeService/Version
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.736784006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2902bb7c-903c-4bb3-b936-ef2ecff4a1e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.737223542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415677737192713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2902bb7c-903c-4bb3-b936-ef2ecff4a1e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.737731121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68bb790b-66db-43e8-bb69-375d5d090b77 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.737808721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68bb790b-66db-43e8-bb69-375d5d090b77 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.737866853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=68bb790b-66db-43e8-bb69-375d5d090b77 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.769329024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=050e52fc-0953-4e5b-a2e4-110e18a20a09 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.769414259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=050e52fc-0953-4e5b-a2e4-110e18a20a09 name=/runtime.v1.RuntimeService/Version
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.770390049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3a1f03f-64f2-405e-ae96-b61834e820c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.770793541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415677770766422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3a1f03f-64f2-405e-ae96-b61834e820c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.771359302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3a48ef2-ff22-4fed-90f5-cff49876e11d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.771437882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3a48ef2-ff22-4fed-90f5-cff49876e11d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.771472031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f3a48ef2-ff22-4fed-90f5-cff49876e11d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.807445808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdae4780-5b50-424a-af88-b84c82c60abe name=/runtime.v1.RuntimeService/Version
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.807582317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdae4780-5b50-424a-af88-b84c82c60abe name=/runtime.v1.RuntimeService/Version
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.809263065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c32641f2-13ca-4e86-bda7-c20b059c4768 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.809831762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728415677809798850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c32641f2-13ca-4e86-bda7-c20b059c4768 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.810576688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fafa5f4-7eb5-4c5f-8f19-78237714996a name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.810667066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fafa5f4-7eb5-4c5f-8f19-78237714996a name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 19:27:57 old-k8s-version-256554 crio[632]: time="2024-10-08 19:27:57.810719825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2fafa5f4-7eb5-4c5f-8f19-78237714996a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050416] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044675] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.049563] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.581000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586261] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 8 19:08] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.059019] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068335] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.205375] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.133900] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.277385] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.210273] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.066679] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.142543] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +12.037421] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 8 19:12] systemd-fstab-generator[5070]: Ignoring "noauto" option for root device
	[Oct 8 19:14] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.062152] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:27:57 up 20 min,  0 users,  load average: 0.00, 0.02, 0.00
	Linux old-k8s-version-256554 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0005b6a20, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0000321e0, 0x24, 0x0, ...)
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: net.(*Dialer).DialContext(0xc000c3db00, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0000321e0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c5a280, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0000321e0, 0x24, 0x60, 0x7f43c9d7bd00, 0x118, ...)
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: net/http.(*Transport).dial(0xc000977400, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0000321e0, 0x24, 0x0, 0x0, 0x0, ...)
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: net/http.(*Transport).dialConn(0xc000977400, 0x4f7fe00, 0xc000122018, 0x0, 0xc000016300, 0x5, 0xc0000321e0, 0x24, 0x0, 0xc00096e120, ...)
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: net/http.(*Transport).dialConnFor(0xc000977400, 0xc000d1c4d0)
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]: created by net/http.(*Transport).queueForDial
	Oct 08 19:27:55 old-k8s-version-256554 kubelet[6880]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Oct 08 19:27:55 old-k8s-version-256554 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 08 19:27:55 old-k8s-version-256554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 08 19:27:56 old-k8s-version-256554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 143.
	Oct 08 19:27:56 old-k8s-version-256554 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 08 19:27:56 old-k8s-version-256554 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 08 19:27:56 old-k8s-version-256554 kubelet[6889]: I1008 19:27:56.157481    6889 server.go:416] Version: v1.20.0
	Oct 08 19:27:56 old-k8s-version-256554 kubelet[6889]: I1008 19:27:56.157846    6889 server.go:837] Client rotation is on, will bootstrap in background
	Oct 08 19:27:56 old-k8s-version-256554 kubelet[6889]: I1008 19:27:56.159877    6889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 08 19:27:56 old-k8s-version-256554 kubelet[6889]: W1008 19:27:56.160769    6889 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 08 19:27:56 old-k8s-version-256554 kubelet[6889]: I1008 19:27:56.161199    6889 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 2 (245.839238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-256554" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (164.04s)
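For reference, a minimal sketch of the recovery flow already suggested in the captured kubeadm/minikube output above (the kubelet troubleshooting commands and the cgroup-driver suggestion). The profile name old-k8s-version-256554 is taken from this log; exact flags may need adjusting for a local reproduction:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
	- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	- 'minikube start -p old-k8s-version-256554 --extra-config=kubelet.cgroup-driver=systemd'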

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (7200.058s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-981259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-brm5p" [7f132e1a-ce8c-44c3-b4b8-f0ff89295706] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (38m18s)
		TestNetworkPlugins/group/enable-default-cni (1m26s)
		TestNetworkPlugins/group/enable-default-cni/NetCatPod (4s)

                                                
                                                
goroutine 8866 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 34 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0000024e0, 0xc001407bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc00090a288, {0x51b8ae0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x52d0cc0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000600dc0)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000600dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00059fd00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 8627 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 8626
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 100 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 8341 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 8340
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 8827 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a80200, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 8822
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 8797 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a80190, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001d77d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a80200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013166f0, {0x3918060, 0xc000a7f3b0}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013166f0, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8827
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3384 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3327
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 162 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0014a5f50, 0xc00092cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0xd8?, 0xc0014a5f50, 0xc0014a5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0xc000002340?, 0x55a000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014a5fd0?, 0x5944c4?, 0xc0007f42d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 101
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 129 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0005bfc90, 0x2c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000931d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0005bfcc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001368b90, {0x3918060, 0xc001398390}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001368b90, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 101
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 163 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 162
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4184 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 101 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0005bfcc0, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 148
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 8798 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc001854f50, 0xc001854f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x10?, 0xc001854f50, 0xc001854f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0xc000003ba0?, 0x55a000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001854fd0?, 0x5944c4?, 0xc0013d0fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8827
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 8631 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 8630
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 7885 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0018a7ad0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0016b3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018a7b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00181ec60, {0x3918060, 0xc001970a80}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00181ec60, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7882
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 8626 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0014a2f50, 0xc0014a2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x0?, 0xc0014a2f50, 0xc0014a2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x9e9f56?, 0xc0019db500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014a2fd0?, 0x5944c4?, 0xc001aaf560?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8614
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4157 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4156
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 8614 [chan receive, 1 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0015e0640, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 8529
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 7882 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018a7b00, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7952
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 8143 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 8142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 8826 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 8822
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 8245 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 8244
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1317 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0000b8750, 0xc0000ccf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0xa0?, 0xc0000b8750, 0xc0000b8798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594465?, 0xc00140c000?, 0xc0001129a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1306
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 8355 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc001d7af50, 0xc001d7af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x0?, 0xc001d7af50, 0xc001d7af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x9e9f56?, 0xc001480f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001d7afd0?, 0x9f82c5?, 0xc00143e300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3385 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0015e0240, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3327
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1306 [chan receive, 97 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000896a80, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1229
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3319 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3318
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 8465 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0015e0610, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000cdd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0015e0640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009f8a70, {0x3918060, 0xc001fa0030}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009f8a70, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8614
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1322 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc00145e300, 0xc000112c40)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1259
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2807 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00149f1e0, 0xc0013c4360)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2561
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3397 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3396
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 7886 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0016b1f50, 0xc0016b1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0xd0?, 0xc0016b1f50, 0xc0016b1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0xc0016b62a0?, 0xc001cb8080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0016b1fd0?, 0x5944c4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7882
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 8632 [chan receive, 1 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a444c0, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 8630
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1318 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1317
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1098 [IO wait, 101 minutes]:
internal/poll.runtime_pollWait(0x7f974042db70, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001f30100?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc001f30100)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc001f30100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000a801c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000a801c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000246e10, {0x3943f60, 0xc000a801c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000246e10)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00149e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1095
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2561 [chan receive, 38 minutes]:
testing.(*T).Run(0xc0014881a0, {0x2c3dfa7?, 0x55983c?}, 0xc0013c4360)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014881a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0014881a0, 0x35db4b0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 8605 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0014a3750, 0xc0014a3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x0?, 0xc0014a3750, 0xc0014a3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x9e9f56?, 0xc00184c300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014a37d0?, 0x5944c4?, 0xc0019fc000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8632
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 8606 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 8605
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 7887 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7886
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 8659 [IO wait]:
internal/poll.runtime_pollWait(0x7f974042d018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000719200?, 0xc0019eb000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000719200, {0xc0019eb000, 0x3000, 0x3000})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000719200, {0xc0019eb000?, 0x10?, 0xc0015e78a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001a4c420, {0xc0019eb000?, 0xc0019eb005?, 0x1a?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0014780f0, {0xc0019eb000?, 0x0?, 0xc0014780f0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0015df438, {0x39186a0, 0xc0014780f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0015df188, {0x7f97403d8210, 0xc00180fd70}, 0xc0015e7a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0015df188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0015df188, {0xc001a08000, 0x1000, 0xc00149d340?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc002076120, {0xc001988ac0, 0x9, 0x5169880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3916760, 0xc002076120}, {0xc001988ac0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001988ac0, 0x9, 0x47bbe5?}, {0x3916760?, 0xc002076120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001988a80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0015e7fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00184c780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 8658
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1305 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1229
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3396 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0000b9750, 0xc00092bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x80?, 0xc0000b9750, 0xc0000b9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x9e9f56?, 0xc001480f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b97d0?, 0x5944c4?, 0xc00140f180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3385
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 8851 [IO wait]:
internal/poll.runtime_pollWait(0x7f974042dc78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000719780?, 0xc001a0a000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000719780, {0xc001a0a000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000719780, {0xc001a0a000?, 0x10?, 0xc00092e8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001a4c628, {0xc001a0a000?, 0xc001a0a005?, 0x1a?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc00180e900, {0xc001a0a000?, 0x0?, 0xc00180e900?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0009c6638, {0x39186a0, 0xc00180e900})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0009c6388, {0x7f97403d8210, 0xc00180e2e8}, 0xc00092ea10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0009c6388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0009c6388, {0xc0018d3000, 0x1000, 0xc001c2ba40?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001e1c360, {0xc0006192a0, 0x9, 0x5169880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3916760, 0xc001e1c360}, {0xc0006192a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0006192a0, 0x9, 0x47bbe5?}, {0x3916760?, 0xc001e1c360?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000619260)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00092efa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001481e00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 8850
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3395 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0015e0210, 0x14)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001a77d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0015e0240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001974820, {0x3918060, 0xc002100e40}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001974820, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3385
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3317 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc00145c4d0, 0x15)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001401d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00145c500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b2a280, {0x3918060, 0xc00131da70}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b2a280, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3248
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 8244 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc001d7b750, 0xc001d7b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x8e?, 0xc001d7b750, 0xc001d7b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0xc000003d40?, 0x55a000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001d7b7d0?, 0x5944c4?, 0xc001f30c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 8423 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0016b4f50, 0xc0016b4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0xe0?, 0xc0016b4f50, 0xc0016b4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x1000000009e9f56?, 0xc001514600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594465?, 0xc0013edc80?, 0xc0017feee0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1563 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc001d3ef00, 0xc001d3af50)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1562
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4155 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0018a6850, 0x11)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0015e2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018a6880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b06040, {0x3918060, 0xc0008000c0}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b06040, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2830 [chan receive]:
testing.(*T).Run(0xc00149f860, {0x2c4663e?, 0x390e918?}, 0xc001b1dd70)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00149f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc00149f860, 0xc001f30600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2807
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 8354 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000a80590, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001a70d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a805c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bd0130, {0x3918060, 0xc001b1c030}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bd0130, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 1730 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017f5200, 0xc0019f9030)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1729
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 8799 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 8798
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 8356 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 8355
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1316 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000896a50, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014f1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000896a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009fcd30, {0x3918060, 0xc0009ba930}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009fcd30, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1306
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3248 [chan receive, 34 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00145c500, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3281
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 8422 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0013e7690, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0018d9580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0013e76c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001ae7310, {0x3918060, 0xc001aafaa0}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001ae7310, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 7881 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 7952
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 8424 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 8423
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 8604 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc001a44490, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001d76580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a444c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017b54b0, {0x3918060, 0xc001398690}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017b54b0, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8632
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 8342 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a805c0, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 8340
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3318 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc0014a2750, 0xc001a73f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x60?, 0xc0014a2750, 0xc0014a2798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594465?, 0xc000216c00?, 0xc000086b60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3248
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 8822 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3950cd8, 0xc0005cf490}, {0x39445c0, 0xc00071ab20}, 0x1, 0x0, 0xc001b07be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3950cd8?, 0xc00049bdc0?}, 0x3b9aca00, 0xc000091dd8?, 0x1, 0xc000091be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3950cd8, 0xc00049bdc0}, 0xc0014889c0, {0xc0015fc300, 0x19}, {0x2c41b5f, 0x7}, {0x2c4860f, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc0014889c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc0014889c0, 0xc001b1dd70)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2830
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 8440 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0013e76c0, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 8435
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 1529 [select, 97 minutes]:
net/http.(*persistConn).readLoop(0xc001a38c60)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1527
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 1530 [select, 97 minutes]:
net/http.(*persistConn).writeLoop(0xc001a38c60)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1527
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 8831 [select]:
golang.org/x/net/http2.(*ClientConn).Ping(0xc00184c780, {0x3950cd8, 0xc0005cf5e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:3061 +0x2c5
golang.org/x/net/http2.(*ClientConn).healthCheck(0xc00184c780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:876 +0xb1
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 8243 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001954310, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001d77580?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396c800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001954340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00131a1d0, {0x3918060, 0xc001aaee10}, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00131a1d0, 0x3b9aca00, 0x0, 0x1, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 8144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 8613 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 8529
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3247 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3281
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 8144 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001954340, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 8142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4156 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3950ff0, 0xc0000862a0}, 0xc001407f50, 0xc001407f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3950ff0, 0xc0000862a0}, 0x80?, 0xc001407f50, 0xc001407f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3950ff0?, 0xc0000862a0?}, 0x9e9f56?, 0xc0019f2900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0018dffd0?, 0x5944c4?, 0xc001fc8a80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 4185 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018a6880, 0xc0000862a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 8439 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39472a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 8435
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238


Test pass (193/265)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 3.86
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 54.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 129.32
31 TestAddons/serial/GCPAuth/Namespaces 1.29
34 TestAddons/parallel/Registry 20.01
36 TestAddons/parallel/InspektorGadget 12.19
39 TestAddons/parallel/CSI 54.08
40 TestAddons/parallel/Headlamp 12.2
41 TestAddons/parallel/CloudSpanner 6.95
42 TestAddons/parallel/LocalPath 10.12
43 TestAddons/parallel/NvidiaDevicePlugin 6.83
44 TestAddons/parallel/Yakd 12.31
46 TestCertOptions 80.14
47 TestCertExpiration 256.8
49 TestForceSystemdFlag 76.7
50 TestForceSystemdEnv 47.67
52 TestKVMDriverInstallOrUpdate 1.23
56 TestErrorSpam/setup 43.61
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.73
59 TestErrorSpam/pause 1.55
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 5.1
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 86.56
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.08
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
73 TestFunctional/serial/CacheCmd/cache/add_local 1.07
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
81 TestFunctional/serial/ExtraConfig 31.21
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.32
84 TestFunctional/serial/LogsFileCmd 1.37
85 TestFunctional/serial/InvalidService 4.63
87 TestFunctional/parallel/ConfigCmd 0.39
88 TestFunctional/parallel/DashboardCmd 13.36
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.9
95 TestFunctional/parallel/ServiceCmdConnect 10.62
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 36.77
99 TestFunctional/parallel/SSHCmd 0.46
100 TestFunctional/parallel/CpCmd 1.33
101 TestFunctional/parallel/MySQL 22.28
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.59
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 0.16
112 TestFunctional/parallel/ServiceCmd/DeployApp 12.25
113 TestFunctional/parallel/Version/short 0.05
114 TestFunctional/parallel/Version/components 0.7
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
120 TestFunctional/parallel/ImageCommands/Setup 0.5
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.7
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.96
136 TestFunctional/parallel/ServiceCmd/List 0.54
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.81
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
143 TestFunctional/parallel/ServiceCmd/Format 0.34
144 TestFunctional/parallel/ServiceCmd/URL 0.41
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
146 TestFunctional/parallel/ProfileCmd/profile_list 0.5
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
148 TestFunctional/parallel/MountCmd/any-port 18.4
149 TestFunctional/parallel/MountCmd/specific-port 1.71
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 191.07
158 TestMultiControlPlane/serial/DeployApp 9.58
159 TestMultiControlPlane/serial/PingHostFromPods 1.22
160 TestMultiControlPlane/serial/AddWorkerNode 54.09
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
163 TestMultiControlPlane/serial/CopyFile 12.82
172 TestMultiControlPlane/serial/RestartCluster 241.08
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
174 TestMultiControlPlane/serial/AddSecondaryNode 70.84
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
179 TestJSONOutput/start/Command 53.05
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.61
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.33
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 85.55
211 TestMountStart/serial/StartWithMountFirst 27.35
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 27.15
214 TestMountStart/serial/VerifyMountSecond 0.38
215 TestMountStart/serial/DeleteFirst 0.71
216 TestMountStart/serial/VerifyMountPostDelete 0.38
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 22.39
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 109.63
223 TestMultiNode/serial/DeployApp2Nodes 4.9
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 48.04
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.58
228 TestMultiNode/serial/CopyFile 7.18
229 TestMultiNode/serial/StopNode 2.3
230 TestMultiNode/serial/StartAfterStop 36.39
232 TestMultiNode/serial/DeleteNode 2.12
234 TestMultiNode/serial/RestartMultiNode 181.05
235 TestMultiNode/serial/ValidateNameConflict 45.18
242 TestScheduledStopUnix 113.31
246 TestRunningBinaryUpgrade 160.73
251 TestPause/serial/Start 106.99
252 TestStoppedBinaryUpgrade/Setup 0.56
253 TestStoppedBinaryUpgrade/Upgrade 176.25
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
265 TestNoKubernetes/serial/StartWithK8s 65.09
277 TestNoKubernetes/serial/StartWithStopK8s 46.98
278 TestNoKubernetes/serial/Start 24.1
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
280 TestNoKubernetes/serial/ProfileList 6.65
281 TestNoKubernetes/serial/Stop 1.32
282 TestNoKubernetes/serial/StartNoArgs 58.35
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
287 TestStartStop/group/no-preload/serial/FirstStart 101.05
289 TestStartStop/group/embed-certs/serial/FirstStart 87.68
290 TestStartStop/group/no-preload/serial/DeployApp 10.31
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.33
295 TestStartStop/group/embed-certs/serial/DeployApp 10.25
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
302 TestStartStop/group/no-preload/serial/SecondStart 646.33
307 TestStartStop/group/embed-certs/serial/SecondStart 519.66
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 551.36
309 TestStartStop/group/old-k8s-version/serial/Stop 4.28
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
321 TestStartStop/group/newest-cni/serial/FirstStart 48.83
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
325 TestStartStop/group/newest-cni/serial/Stop 7.49
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.99
327 TestStartStop/group/newest-cni/serial/SecondStart 38.35
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
336 TestStartStop/group/newest-cni/serial/Pause 4.83
x
+
TestDownloadOnly/v1.20.0/json-events (8.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-463465 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-463465 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.657223816s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.66s)

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1008 17:33:35.208098  537013 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1008 17:33:35.208196  537013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-463465
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-463465: exit status 85 (64.555866ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-463465 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |          |
	|         | -p download-only-463465        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:33:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:33:26.594447  537025 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:33:26.594697  537025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:26.594705  537025 out.go:358] Setting ErrFile to fd 2...
	I1008 17:33:26.594709  537025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:26.594932  537025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	W1008 17:33:26.595053  537025 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19774-529764/.minikube/config/config.json: open /home/jenkins/minikube-integration/19774-529764/.minikube/config/config.json: no such file or directory
	I1008 17:33:26.595607  537025 out.go:352] Setting JSON to true
	I1008 17:33:26.596643  537025 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4559,"bootTime":1728404248,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:33:26.596748  537025 start.go:139] virtualization: kvm guest
	I1008 17:33:26.599175  537025 out.go:97] [download-only-463465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1008 17:33:26.599285  537025 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 17:33:26.599342  537025 notify.go:220] Checking for updates...
	I1008 17:33:26.600754  537025 out.go:169] MINIKUBE_LOCATION=19774
	I1008 17:33:26.602090  537025 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:33:26.603461  537025 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:33:26.604663  537025 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:26.606213  537025 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 17:33:26.608553  537025 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 17:33:26.608752  537025 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:33:26.639792  537025 out.go:97] Using the kvm2 driver based on user configuration
	I1008 17:33:26.639817  537025 start.go:297] selected driver: kvm2
	I1008 17:33:26.639824  537025 start.go:901] validating driver "kvm2" against <nil>
	I1008 17:33:26.640114  537025 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:33:26.640195  537025 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19774-529764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 17:33:26.654967  537025 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1008 17:33:26.655022  537025 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 17:33:26.655530  537025 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1008 17:33:26.655674  537025 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 17:33:26.655701  537025 cni.go:84] Creating CNI manager for ""
	I1008 17:33:26.655748  537025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 17:33:26.655757  537025 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 17:33:26.655804  537025 start.go:340] cluster config:
	{Name:download-only-463465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-463465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:33:26.655971  537025 iso.go:125] acquiring lock: {Name:mk4048b095336416b7ad19e9602d73d6f6e69078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 17:33:26.657595  537025 out.go:97] Downloading VM boot image ...
	I1008 17:33:26.657624  537025 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1008 17:33:29.911256  537025 out.go:97] Starting "download-only-463465" primary control-plane node in "download-only-463465" cluster
	I1008 17:33:29.911280  537025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 17:33:29.937494  537025 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1008 17:33:29.937521  537025 cache.go:56] Caching tarball of preloaded images
	I1008 17:33:29.937670  537025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1008 17:33:29.939153  537025 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1008 17:33:29.939173  537025 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1008 17:33:29.971613  537025 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-463465 host does not exist
	  To start a cluster, run: "minikube start -p download-only-463465"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-463465
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

x
+
TestDownloadOnly/v1.31.1/json-events (3.86s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-691270 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-691270 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.860684854s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.86s)

x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1008 17:33:39.396810  537013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1008 17:33:39.396863  537013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19774-529764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-691270
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-691270: exit status 85 (63.017243ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-463465 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | -p download-only-463465        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| delete  | -p download-only-463465        | download-only-463465 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC | 08 Oct 24 17:33 UTC |
	| start   | -o=json --download-only        | download-only-691270 | jenkins | v1.34.0 | 08 Oct 24 17:33 UTC |                     |
	|         | -p download-only-691270        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 17:33:35
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 17:33:35.578015  537212 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:33:35.578128  537212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:35.578138  537212 out.go:358] Setting ErrFile to fd 2...
	I1008 17:33:35.578142  537212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:33:35.578306  537212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:33:35.578866  537212 out.go:352] Setting JSON to true
	I1008 17:33:35.579838  537212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4568,"bootTime":1728404248,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:33:35.579944  537212 start.go:139] virtualization: kvm guest
	I1008 17:33:35.581922  537212 out.go:97] [download-only-691270] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:33:35.582069  537212 notify.go:220] Checking for updates...
	I1008 17:33:35.583318  537212 out.go:169] MINIKUBE_LOCATION=19774
	I1008 17:33:35.584685  537212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:33:35.586028  537212 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:33:35.587182  537212 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:33:35.588239  537212 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-691270 host does not exist
	  To start a cluster, run: "minikube start -p download-only-691270"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-691270
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

x
+
TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1008 17:33:39.980855  537013 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-340266 --alsologtostderr --binary-mirror http://127.0.0.1:46361 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-340266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-340266
--- PASS: TestBinaryMirror (0.61s)

x
+
TestOffline (54.8s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-907125 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-907125 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (53.923235402s)
helpers_test.go:175: Cleaning up "offline-crio-907125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-907125
--- PASS: TestOffline (54.80s)

x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-738106
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-738106: exit status 85 (59.070721ms)

-- stdout --
	* Profile "addons-738106" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-738106"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-738106
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-738106: exit status 85 (59.865339ms)

-- stdout --
	* Profile "addons-738106" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-738106"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

x
+
TestAddons/Setup (129.32s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-738106 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-738106 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.322260176s)
--- PASS: TestAddons/Setup (129.32s)

x
+
TestAddons/serial/GCPAuth/Namespaces (1.29s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-738106 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-738106 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-738106 get secret gcp-auth -n new-namespace: exit status 1 (76.367644ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-738106 logs -l app=gcp-auth -n gcp-auth
I1008 17:35:50.494958  537013 retry.go:31] will retry after 1.003072988s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/10/08 17:35:49 GCP Auth Webhook started!
	2024/10/08 17:35:50 Ready to marshal response ...
	2024/10/08 17:35:50 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-738106 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.29s)

                                                
                                    
x
+
TestAddons/parallel/Registry (20.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.477819ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-wsg7d" [1e47d1a8-5e9a-4214-9302-306efa48abeb] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003464698s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6hj56" [0c50d7bc-8a1f-4eb6-a83a-d29fda2e2722] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007159323s
addons_test.go:331: (dbg) Run:  kubectl --context addons-738106 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-738106 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-738106 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.138666701s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 ip
2024/10/08 17:44:21 [DEBUG] GET http://192.168.39.48:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.01s)
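
The registry check above launches a busybox pod to wget --spider the in-cluster service and then fetches http://192.168.39.48:5000 from the host. A minimal host-side sketch of that last probe in Go, assuming the node IP and port from the DEBUG line above are still reachable (this is illustrative and not part of the test code):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Node IP and port taken from the "GET http://192.168.39.48:5000" log line above.
	resp, err := http.Get("http://192.168.39.48:5000")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Any 2xx response means the registry endpoint is serving traffic.
	fmt.Println("registry responded with", resp.Status)
}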

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.19s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pr252" [b2cdb400-10c5-4e71-b0d3-f068655ec286] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003662383s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 addons disable inspektor-gadget --alsologtostderr -v=1: (6.182659431s)
--- PASS: TestAddons/parallel/InspektorGadget (12.19s)

                                                
                                    
x
+
TestAddons/parallel/CSI (54.08s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1008 17:44:14.926944  537013 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1008 17:44:14.931506  537013 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1008 17:44:14.931531  537013 kapi.go:107] duration metric: took 4.598546ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.606212ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-738106 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-738106 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [61d408d2-60fd-4a9a-98bb-24b1e7d2737d] Pending
helpers_test.go:344: "task-pv-pod" [61d408d2-60fd-4a9a-98bb-24b1e7d2737d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [61d408d2-60fd-4a9a-98bb-24b1e7d2737d] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004340127s
addons_test.go:511: (dbg) Run:  kubectl --context addons-738106 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-738106 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-738106 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-738106 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-738106 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-738106 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-738106 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7d0761a8-c6cb-4829-8a78-e7e1de94dba6] Pending
helpers_test.go:344: "task-pv-pod-restore" [7d0761a8-c6cb-4829-8a78-e7e1de94dba6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7d0761a8-c6cb-4829-8a78-e7e1de94dba6] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004156094s
addons_test.go:553: (dbg) Run:  kubectl --context addons-738106 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-738106 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-738106 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.666843247s)
--- PASS: TestAddons/parallel/CSI (54.08s)
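
The helpers_test.go:394 lines above repeatedly run kubectl get pvc -o jsonpath={.status.phase} until the claim reports Bound. A rough equivalent of that poll in Go with client-go is sketched below; the kubeconfig path is an assumption for illustration, while the claim name hpvc and namespace default come from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the test drives the "addons-738106" context instead.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll the claim's phase until it is Bound or the deadline passes,
	// mirroring the jsonpath={.status.phase} loop in the log above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pvc, err := client.CoreV1().PersistentVolumeClaims("default").
			Get(context.TODO(), "hpvc", metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to become Bound")
}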

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-738106 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-tn9fh" [54229a0d-9b3f-4514-9ca0-4cb2050631c8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-tn9fh" [54229a0d-9b3f-4514-9ca0-4cb2050631c8] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004657523s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.20s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.95s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-5ftt2" [6b75a276-8d2f-44b6-a70a-ce83032b2a7b] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004447372s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.95s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-738106 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-738106 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d7ddd615-ca27-4b88-872d-66121862bcdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d7ddd615-ca27-4b88-872d-66121862bcdc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d7ddd615-ca27-4b88-872d-66121862bcdc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003189907s
addons_test.go:901: (dbg) Run:  kubectl --context addons-738106 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 ssh "cat /opt/local-path-provisioner/pvc-d1d617de-cc0c-4dd9-bd33-d96d94d0bb04_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-738106 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-738106 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.12s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dz2k9" [42202b26-4c49-44bb-836f-cfcd7b7a3a5f] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004337725s
addons_test.go:961: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-738106
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.83s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4rb8v" [c765f3a4-6f3f-42e7-a39e-a548343d2fdb] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004360461s
addons_test.go:973: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-amd64 -p addons-738106 addons disable yakd --alsologtostderr -v=1: (6.307538651s)
--- PASS: TestAddons/parallel/Yakd (12.31s)

                                                
                                    
x
+
TestCertOptions (80.14s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-773474 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1008 18:56:38.897226  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-773474 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m18.917152497s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-773474 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-773474 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-773474 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-773474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-773474
--- PASS: TestCertOptions (80.14s)
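
The openssl x509 step above confirms that the extra --apiserver-ips and --apiserver-names values landed in the apiserver certificate's subject alternative names. A minimal sketch of the same check with Go's crypto/x509, assuming the certificate has been copied out of the guest to a local apiserver.crt (the test reads /var/lib/minikube/certs/apiserver.crt over ssh instead):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed local copy of the certificate inspected over ssh in the test.
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// For this run the SANs should include www.google.com and 192.168.15.15.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP addresses:", cert.IPAddresses)
}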

                                                
                                    
x
+
TestCertExpiration (256.8s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-439352 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-439352 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (47.721501492s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-439352 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-439352 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (28.080920554s)
helpers_test.go:175: Cleaning up "cert-expiration-439352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-439352
--- PASS: TestCertExpiration (256.80s)

                                                
                                    
x
+
TestForceSystemdFlag (76.7s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-254330 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-254330 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.674205965s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-254330 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-254330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-254330
--- PASS: TestForceSystemdFlag (76.70s)

                                                
                                    
x
+
TestForceSystemdEnv (47.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-193077 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-193077 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.844283854s)
helpers_test.go:175: Cleaning up "force-systemd-env-193077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-193077
I1008 18:55:18.077294  537013 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2267052280/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80] Decompressors:map[bz2:0xc0007a03b8 gz:0xc0007a0440 tar:0xc0007a03f0 tar.bz2:0xc0007a0400 tar.gz:0xc0007a0410 tar.xz:0xc0007a0420 tar.zst:0xc0007a0430 tbz2:0xc0007a0400 tgz:0xc0007a0410 txz:0xc0007a0420 tzst:0xc0007a0430 xz:0xc0007a0448 zip:0xc0007a0450 zst:0xc0007a0460] Getters:map[file:0xc001974e20 http:0xc000075590 https:0xc0000755e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1008 18:55:18.077349  537013 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2267052280/001/docker-machine-driver-kvm2
I1008 18:55:18.679238  537013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 18:55:18.679321  537013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1008 18:55:18.706941  537013 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1008 18:55:18.706975  537013 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1008 18:55:18.707047  537013 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1008 18:55:18.707080  537013 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2267052280/002/docker-machine-driver-kvm2
I1008 18:55:18.730681  537013 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2267052280/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80 0x52f4c80] Decompressors:map[bz2:0xc0007a03b8 gz:0xc0007a0440 tar:0xc0007a03f0 tar.bz2:0xc0007a0400 tar.gz:0xc0007a0410 tar.xz:0xc0007a0420 tar.zst:0xc0007a0430 tbz2:0xc0007a0400 tgz:0xc0007a0410 txz:0xc0007a0420 tzst:0xc0007a0430 xz:0xc0007a0448 zip:0xc0007a0450 zst:0xc0007a0460] Getters:map[file:0xc001975af0 http:0xc0008e95e0 https:0xc0008e9630] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1008 18:55:18.730725  537013 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2267052280/002/docker-machine-driver-kvm2
--- PASS: TestForceSystemdEnv (47.67s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.23s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1008 18:55:17.902340  537013 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 18:55:17.902507  537013 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1008 18:55:17.931321  537013 install.go:62] docker-machine-driver-kvm2: exit status 1
W1008 18:55:17.931613  537013 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1008 18:55:17.931670  537013 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2267052280/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.23s)
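
The driver.go:46 lines earlier in the log show the updater first requesting the arch-specific docker-machine-driver-kvm2-amd64 asset and, after the checksum file comes back 404, retrying the common name. A rough sketch of that fallback pattern using plain net/http; the real code goes through go-getter with checksum verification, which this illustration omits:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dst and reports whether the server returned 200 OK.
func fetch(url, dst string) (bool, error) {
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// e.g. the 404 on the arch-specific checksum seen in the log.
		return false, nil
	}
	out, err := os.Create(dst)
	if err != nil {
		return false, err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err == nil, err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	dst := "docker-machine-driver-kvm2"

	// Prefer the arch-specific asset; fall back to the common name,
	// mirroring the "trying to get the common version" log line.
	ok, err := fetch(base+"docker-machine-driver-kvm2-amd64", dst)
	if err != nil || !ok {
		if ok2, err2 := fetch(base+"docker-machine-driver-kvm2", dst); err2 != nil || !ok2 {
			panic("both downloads failed")
		}
	}
	fmt.Println("downloaded", dst)
}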

                                                
                                    
x
+
TestErrorSpam/setup (43.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-438881 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-438881 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-438881 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-438881 --driver=kvm2  --container-runtime=crio: (43.605270581s)
--- PASS: TestErrorSpam/setup (43.61s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (5.1s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 stop: (2.303372088s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 stop: (1.310759137s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-438881 --log_dir /tmp/nospam-438881 stop: (1.484219588s)
--- PASS: TestErrorSpam/stop (5.10s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19774-529764/.minikube/files/etc/test/nested/copy/537013/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (86.56s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-922806 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-922806 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.555782634s)
--- PASS: TestFunctional/serial/StartWithProxy (86.56s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1008 17:55:12.070124  537013 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-922806 --alsologtostderr -v=8
E1008 17:55:51.764511  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:51.770863  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:51.782163  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:51.803431  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:51.844794  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:51.927046  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:52.088633  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:52.410273  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:55:53.052171  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-922806 --alsologtostderr -v=8: (41.083880012s)
functional_test.go:663: soft start took 41.084649549s for "functional-922806" cluster.
I1008 17:55:53.154399  537013 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (41.08s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-922806 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cache add registry.k8s.io/pause:3.1
E1008 17:55:54.334256  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 cache add registry.k8s.io/pause:3.1: (1.106233041s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 cache add registry.k8s.io/pause:3.3: (1.207995191s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 cache add registry.k8s.io/pause:latest: (1.103345005s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-922806 /tmp/TestFunctionalserialCacheCmdcacheadd_local611082604/001
E1008 17:55:56.895884  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cache add minikube-local-cache-test:functional-922806
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cache delete minikube-local-cache-test:functional-922806
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-922806
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.91451ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 kubectl -- --context functional-922806 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-922806 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.21s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-922806 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1008 17:56:02.017194  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 17:56:12.259431  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-922806 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.207290119s)
functional_test.go:761: restart took 31.207425865s for "functional-922806" cluster.
I1008 17:56:31.270509  537013 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (31.21s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-922806 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 logs: (1.317557615s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 logs --file /tmp/TestFunctionalserialLogsFileCmd1227507533/001/logs.txt
E1008 17:56:32.740765  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 logs --file /tmp/TestFunctionalserialLogsFileCmd1227507533/001/logs.txt: (1.36585427s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.63s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-922806 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-922806
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-922806: exit status 115 (275.092511ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.244:31610 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-922806 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-922806 delete -f testdata/invalidsvc.yaml: (1.155374376s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 config get cpus: exit status 14 (76.904207ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 config get cpus: exit status 14 (54.336746ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (13.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-922806 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-922806 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 546734: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.36s)
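
The dashboard test starts `minikube dashboard --url` as a background daemon, reads the URL it prints, and later stops it; the "unable to kill pid ...: os: process already finished" message is harmless, since the process may already have exited by the time the cleanup fires. A sketch of that start/probe/stop pattern under the same assumption (flags and profile copied from the log):

	package main

	import (
		"errors"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"dashboard", "--url", "--port", "36195", "-p", "functional-922806")
		cmd.Stdout = os.Stdout // the dashboard URL is printed here
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		time.Sleep(10 * time.Second) // probe the printed URL during this window

		// Killing a process that has already exited is not a failure worth reporting.
		if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
			log.Printf("unable to kill pid %d: %v", cmd.Process.Pid, err)
		}
		_ = cmd.Wait()
	}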

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-922806 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-922806 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.446428ms)
-- stdout --
	* [functional-922806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1008 17:56:39.240138  546290 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:56:39.240275  546290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:56:39.240286  546290 out.go:358] Setting ErrFile to fd 2...
	I1008 17:56:39.240292  546290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:56:39.240522  546290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:56:39.241099  546290 out.go:352] Setting JSON to false
	I1008 17:56:39.242112  546290 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5951,"bootTime":1728404248,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:56:39.242180  546290 start.go:139] virtualization: kvm guest
	I1008 17:56:39.244091  546290 out.go:177] * [functional-922806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1008 17:56:39.245354  546290 notify.go:220] Checking for updates...
	I1008 17:56:39.245388  546290 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:56:39.246779  546290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:56:39.247996  546290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:56:39.249206  546290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:56:39.250309  546290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:56:39.252267  546290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:56:39.254235  546290 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:56:39.254996  546290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:56:39.255061  546290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:56:39.271572  546290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I1008 17:56:39.272096  546290 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:56:39.272683  546290 main.go:141] libmachine: Using API Version  1
	I1008 17:56:39.272704  546290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:56:39.273202  546290 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:56:39.273405  546290 main.go:141] libmachine: (functional-922806) Calling .DriverName
	I1008 17:56:39.273716  546290 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:56:39.274033  546290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:56:39.274080  546290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:56:39.291537  546290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I1008 17:56:39.292071  546290 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:56:39.292586  546290 main.go:141] libmachine: Using API Version  1
	I1008 17:56:39.292616  546290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:56:39.292940  546290 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:56:39.293129  546290 main.go:141] libmachine: (functional-922806) Calling .DriverName
	I1008 17:56:39.332431  546290 out.go:177] * Using the kvm2 driver based on existing profile
	I1008 17:56:39.333703  546290 start.go:297] selected driver: kvm2
	I1008 17:56:39.333722  546290 start.go:901] validating driver "kvm2" against &{Name:functional-922806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-922806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:56:39.333863  546290 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:56:39.336079  546290 out.go:201] 
	W1008 17:56:39.337177  546290 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 17:56:39.338423  546290 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-922806 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
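
Both invocations stop before creating anything: the first requests 250MB, below the 1800MB usable minimum, so the dry run exits with status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY advice; the second, with no memory override, validates cleanly. A sketch that separates the two outcomes purely by exit code (binary path, profile and the status value 23 are taken from this log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func dryRun(extra ...string) int {
		args := append([]string{"start", "-p", "functional-922806", "--dry-run",
			"--driver=kvm2", "--container-runtime=crio"}, extra...)
		if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				return ee.ExitCode()
			}
			return -1
		}
		return 0
	}

	func main() {
		fmt.Println("250MB dry run:", dryRun("--memory", "250MB")) // 23 in this log
		fmt.Println("default dry run:", dryRun())                  // 0 expected
	}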

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-922806 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-922806 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (154.053077ms)
-- stdout --
	* [functional-922806] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1008 17:56:39.097944  546229 out.go:345] Setting OutFile to fd 1 ...
	I1008 17:56:39.098082  546229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:56:39.098095  546229 out.go:358] Setting ErrFile to fd 2...
	I1008 17:56:39.098101  546229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 17:56:39.098534  546229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 17:56:39.099230  546229 out.go:352] Setting JSON to false
	I1008 17:56:39.100632  546229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5951,"bootTime":1728404248,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 17:56:39.100768  546229 start.go:139] virtualization: kvm guest
	I1008 17:56:39.103109  546229 out.go:177] * [functional-922806] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1008 17:56:39.104440  546229 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 17:56:39.104482  546229 notify.go:220] Checking for updates...
	I1008 17:56:39.106857  546229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 17:56:39.108204  546229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	I1008 17:56:39.109472  546229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	I1008 17:56:39.110835  546229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 17:56:39.112034  546229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 17:56:39.113688  546229 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 17:56:39.114133  546229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:56:39.114194  546229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:56:39.132079  546229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I1008 17:56:39.132465  546229 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:56:39.133062  546229 main.go:141] libmachine: Using API Version  1
	I1008 17:56:39.133091  546229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:56:39.133425  546229 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:56:39.133613  546229 main.go:141] libmachine: (functional-922806) Calling .DriverName
	I1008 17:56:39.133872  546229 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 17:56:39.134166  546229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 17:56:39.134192  546229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 17:56:39.149420  546229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I1008 17:56:39.149859  546229 main.go:141] libmachine: () Calling .GetVersion
	I1008 17:56:39.150380  546229 main.go:141] libmachine: Using API Version  1
	I1008 17:56:39.150415  546229 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 17:56:39.150790  546229 main.go:141] libmachine: () Calling .GetMachineName
	I1008 17:56:39.150973  546229 main.go:141] libmachine: (functional-922806) Calling .DriverName
	I1008 17:56:39.183402  546229 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1008 17:56:39.184557  546229 start.go:297] selected driver: kvm2
	I1008 17:56:39.184581  546229 start.go:901] validating driver "kvm2" against &{Name:functional-922806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-922806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 17:56:39.184693  546229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 17:56:39.186713  546229 out.go:201] 
	W1008 17:56:39.187772  546229 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 17:56:39.188822  546229 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
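
This is the same insufficient-memory dry run as above; only the locale differs, so the RSRC_INSUFFICIENT_REQ_MEMORY message comes back in French while the exit code stays 23. A sketch of driving that from Go, assuming the CLI picks its message catalogue from the standard locale environment variables (which is what this test appears to rely on):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "functional-922806", "--dry-run", "--memory", "250MB",
			"--driver=kvm2", "--container-runtime=crio")
		// Assumption: the locale variables select the translation.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\nexit: %v\n", out, err) // expect the localized failure text
	}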

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
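
The `-f` argument is an ordinary Go template rendered against minikube's status struct; the references above are .Host, .Kubelet, .APIServer and .Kubeconfig (the literal "kublet" in the format string is just a label typo in the test and does not affect the field lookup). A self-contained sketch showing how such a template maps onto struct fields (the struct here is a stand-in carrying only the referenced fields):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in with only the fields the -f template references.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{
			Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
		})
	}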

TestFunctional/parallel/ServiceCmdConnect (10.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-922806 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-922806 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p4ks2" [ac5758d9-ca63-4525-a880-7cfe6a7b3f4b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-p4ks2" [ac5758d9-ca63-4525-a880-7cfe6a7b3f4b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003873306s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.244:31050
functional_test.go:1675: http://192.168.39.244:31050: success! body:
Hostname: hello-node-connect-67bdd5bbb4-p4ks2
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.244:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.244:31050
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
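
The flow above is: create a deployment, expose it as a NodePort service, ask `minikube service hello-node-connect --url` for the reachable URL, then issue a plain HTTP GET and inspect the echo body. A sketch of that last verification step (the URL is whatever `service --url` prints on your run; the one below is from this log):

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		url := "http://192.168.39.244:31050" // substitute the output of `minikube service hello-node-connect --url`
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if !strings.Contains(string(body), "Hostname:") { // the echoserver reports the serving pod's hostname
			log.Fatalf("unexpected body: %s", body)
		}
		fmt.Println("service reachable:", strings.SplitN(string(body), "\n", 2)[0])
	}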

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (36.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [252e0f23-f3e1-4d7e-9070-a80810d8df04] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006655468s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-922806 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-922806 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-922806 get pvc myclaim -o=json
I1008 17:56:47.156505  537013 retry.go:31] will retry after 2.82695708s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:32d9cb8b-f2a0-4588-ae38-5e31a516b945 ResourceVersion:761 Generation:0 CreationTimestamp:2024-10-08 17:56:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b2ba70 VolumeMode:0xc001b2ba80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-922806 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-922806 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c0ea9f86-831a-481d-bbb9-5a5b41bc2bcf] Pending
helpers_test.go:344: "sp-pod" [c0ea9f86-831a-481d-bbb9-5a5b41bc2bcf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c0ea9f86-831a-481d-bbb9-5a5b41bc2bcf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004426209s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-922806 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-922806 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-922806 delete -f testdata/storage-provisioner/pod.yaml: (4.624258369s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-922806 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2abaaee7-2260-49b6-a623-24cc4e943c6e] Pending
helpers_test.go:344: "sp-pod" [2abaaee7-2260-49b6-a623-24cc4e943c6e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2abaaee7-2260-49b6-a623-24cc4e943c6e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003655803s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-922806 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.77s)
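
The moving part here is the polling: the claim is applied, read back, and re-read until .status.phase is "Bound" (the retry.go line above shows one such back-off), after which the first pod writes /tmp/mount/foo, is deleted, and a replacement pod lists the same path to prove the data outlived it. A sketch of the wait-for-Bound loop using kubectl's jsonpath output (context and claim name from the log):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		for {
			out, err := exec.Command("kubectl", "--context", "functional-922806",
				"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
			if err != nil {
				log.Fatal(err)
			}
			phase := strings.TrimSpace(string(out))
			if phase == "Bound" {
				fmt.Println("pvc is bound")
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("pvc still %q at deadline", phase)
			}
			fmt.Printf("pvc phase = %q, want \"Bound\"; retrying\n", phase)
			time.Sleep(3 * time.Second)
		}
	}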

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh -n functional-922806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cp functional-922806:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2525585447/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh -n functional-922806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh -n functional-922806 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
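
Each `cp` above is verified by reading the destination back over `ssh -n ... "sudo cat ..."` and comparing it with the local testdata file. A sketch of one copy-then-verify round trip (binary path, profile and file paths copied from the log):

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

		if err := exec.Command("out/minikube-linux-amd64", "-p", "functional-922806",
			"cp", local, remote).Run(); err != nil {
			log.Fatal(err)
		}
		got, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-922806",
			"ssh", "-n", "functional-922806", "sudo cat "+remote).Output()
		if err != nil {
			log.Fatal(err)
		}
		want, _ := os.ReadFile(local)
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("copied file differs from %s", local)
		}
		log.Println("cp round trip verified")
	}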

TestFunctional/parallel/MySQL (22.28s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-922806 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9jh7g" [487e6d70-f424-4b5c-bf5e-1bdbd46f478f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9jh7g" [487e6d70-f424-4b5c-bf5e-1bdbd46f478f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004152653s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-922806 exec mysql-6cdb49bbb-9jh7g -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-922806 exec mysql-6cdb49bbb-9jh7g -- mysql -ppassword -e "show databases;": exit status 1 (130.580725ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1008 17:57:14.326136  537013 retry.go:31] will retry after 787.617533ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-922806 exec mysql-6cdb49bbb-9jh7g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.28s)
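
The first exec lands before mysqld has opened its socket, hence ERROR 2002; the harness simply waits (787ms here, chosen by retry.go) and repeats the command, which succeeds once the server is up. A sketch of the same retry-on-transient-error pattern around the kubectl exec (the pod name is from this run; look yours up with kubectl get pods):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-6cdb49bbb-9jh7g" // placeholder pod name from this run
		var lastErr error
		for attempt, wait := 1, time.Second; attempt <= 5; attempt, wait = attempt+1, wait*2 {
			out, err := exec.Command("kubectl", "--context", "functional-922806",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				log.Printf("databases:\n%s", out)
				return
			}
			lastErr = err
			log.Printf("attempt %d failed (%v); retrying in %s", attempt, err, wait)
			time.Sleep(wait) // simple exponential back-off; the harness uses a randomized delay instead
		}
		log.Fatalf("mysql never became reachable: %v", lastErr)
	}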

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/537013/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /etc/test/nested/copy/537013/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
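
The file under test, /etc/test/nested/copy/537013/hosts, is not written inside the VM by hand: minikube's file-sync mechanism copies anything staged under $MINIKUBE_HOME/files/<absolute path> into the node at that path when the machine starts (537013 is just the test run's PID). A sketch of staging such a file locally, under that assumption:

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			log.Fatal(err)
		}
		// Files below $MINIKUBE_HOME/files/ are mirrored into the node at the same absolute path.
		dst := filepath.Join(home, ".minikube", "files", "etc", "test", "nested", "copy", "537013", "hosts")
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile(dst, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Println("staged", dst, "- it should appear inside the VM after the next start")
	}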

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/537013.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /etc/ssl/certs/537013.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/537013.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /usr/share/ca-certificates/537013.pem"
2024/10/08 17:56:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5370132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /etc/ssl/certs/5370132.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5370132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /usr/share/ca-certificates/5370132.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-922806 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
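
The --template argument is a Go text/template evaluated over the `get nodes` JSON document: `(index .items 0)` selects the first node and `range` walks its .metadata.labels map, emitting each key. A self-contained sketch of the same template against a stubbed node list:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Minimal stand-in for the document `kubectl get nodes -o json` would return.
		nodes := map[string]any{
			"items": []any{
				map[string]any{"metadata": map[string]any{"labels": map[string]string{
					"kubernetes.io/hostname": "functional-922806",
					"kubernetes.io/os":       "linux",
				}}},
			},
		}
		tmpl := template.Must(template.New("labels").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
		_ = tmpl.Execute(os.Stdout, nodes)
	}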

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh "sudo systemctl is-active docker": exit status 1 (219.404237ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh "sudo systemctl is-active containerd": exit status 1 (222.362373ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
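
With crio as the configured runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (status 3, visible in the ssh stderr), and `minikube ssh` surfaces that as its own non-zero exit; the test treats that failure as the expected outcome. A sketch that reads the exit code explicitly instead of only pass/fail:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func isActive(unit string) bool {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-922806",
			"ssh", "sudo systemctl is-active "+unit).Run()
		if err == nil {
			return true // exit 0: the unit is active
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("%s: exited with status %d (not active)\n", unit, ee.ExitCode())
			return false
		}
		fmt.Printf("%s: could not run check: %v\n", unit, err)
		return false
	}

	func main() {
		for _, unit := range []string{"docker", "containerd", "crio"} {
			fmt.Println(unit, "active:", isActive(unit))
		}
	}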

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-922806 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-922806 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jvdrl" [31868dd5-eb1e-4461-ac34-c0d2e13e22b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jvdrl" [31868dd5-eb1e-4461-ac34-c0d2e13e22b4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004078878s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-922806 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/kicbase/echo-server           | functional-922806  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-922806  | a78bcd38a4381 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-922806  | bb15606376715 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-922806 image ls --format table --alsologtostderr:
I1008 17:57:11.477454  548202 out.go:345] Setting OutFile to fd 1 ...
I1008 17:57:11.477573  548202 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:11.477586  548202 out.go:358] Setting ErrFile to fd 2...
I1008 17:57:11.477590  548202 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:11.477785  548202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
I1008 17:57:11.478417  548202 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:11.478531  548202 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:11.478911  548202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:11.478966  548202 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:11.494258  548202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
I1008 17:57:11.494734  548202 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:11.495319  548202 main.go:141] libmachine: Using API Version  1
I1008 17:57:11.495342  548202 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:11.495735  548202 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:11.495949  548202 main.go:141] libmachine: (functional-922806) Calling .GetState
I1008 17:57:11.497822  548202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:11.497876  548202 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:11.512670  548202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
I1008 17:57:11.513187  548202 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:11.513727  548202 main.go:141] libmachine: Using API Version  1
I1008 17:57:11.513752  548202 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:11.514092  548202 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:11.514282  548202 main.go:141] libmachine: (functional-922806) Calling .DriverName
I1008 17:57:11.514513  548202 ssh_runner.go:195] Run: systemctl --version
I1008 17:57:11.514546  548202 main.go:141] libmachine: (functional-922806) Calling .GetSSHHostname
I1008 17:57:11.517410  548202 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:11.517844  548202 main.go:141] libmachine: (functional-922806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:4c:59", ip: ""} in network mk-functional-922806: {Iface:virbr1 ExpiryTime:2024-10-08 18:54:00 +0000 UTC Type:0 Mac:52:54:00:f7:4c:59 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-922806 Clientid:01:52:54:00:f7:4c:59}
I1008 17:57:11.517882  548202 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined IP address 192.168.39.244 and MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:11.518084  548202 main.go:141] libmachine: (functional-922806) Calling .GetSSHPort
I1008 17:57:11.518280  548202 main.go:141] libmachine: (functional-922806) Calling .GetSSHKeyPath
I1008 17:57:11.518438  548202 main.go:141] libmachine: (functional-922806) Calling .GetSSHUsername
I1008 17:57:11.518561  548202 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/functional-922806/id_rsa Username:docker}
I1008 17:57:11.636784  548202 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 17:57:11.691944  548202 main.go:141] libmachine: Making call to close driver server
I1008 17:57:11.691964  548202 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:11.692292  548202 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:11.692314  548202 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 17:57:11.692327  548202 main.go:141] libmachine: Making call to close driver server
I1008 17:57:11.692329  548202 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:11.692339  548202 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:11.692567  548202 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:11.692594  548202 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 17:57:11.692596  548202 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
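
Both the table above and the JSON listing in the next test are rendered from the same `sudo crictl images --output json` call visible in the stderr log. The JSON form exposes each entry's id, repoDigests, repoTags and size; a minimal sketch that decodes that shape (the field set is taken from the output shown below, so treat it as illustrative rather than a full schema):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// image mirrors the fields visible in `image ls --format json` output.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-922806",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatal(err)
		}
		for _, img := range images {
			fmt.Printf("%-14.14s  %10s  %v\n", img.ID, img.Size, img.RepoTags)
		}
	}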

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-922806 image ls --format json --alsologtostderr:
[{"id":"71fdfb6d8f2a674f9cdbac62466474cb92ee40b6a541693c29c6e0db1e3086b9","repoDigests":["docker.io/library/e0dc0de3ba90e0455fc4a929ef0e741206de2ee8e6fe2c6ec9afc154653fea6f-tmp@sha256:41223b8a50255ed30fa05f98d15f708b81222621008fe9579f49db5b2e5ae761"],"repoTags":[],"size":"1466018"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"9aa1fad941575ee
d91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62
e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5
048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":
"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gc
r.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-922806"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c7
8382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a78bcd38a43819f6f101da5d3aaf0b25b9488239055da23b119d0e0ade7a6153","repoDigests":["localhost/minikube-local-cache-test@sha256:07da495cca581c2561c02396e96ef5350d501aa395b43414cbd965c78cdfef9c"],"repoTags":["localhost/minikube-local-cache-test:functional-922806"],"size":"3330"},{"id":"bb156063767154be457111cfea51540b079c812895c7917cac77180c532016c2","repoDigests":["localhost/my-image@sha256:e8cab58ff1620a6ef3c193448ce176a2146d17d9eda5ab6caac99cd64340b963"],"repoTags":["localhost/my-image:functional-922806"],"size":"1468600"},{"id":"175ffd71cce3d90bae95904b55260db941
b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-922806 image ls --format json --alsologtostderr:
I1008 17:57:11.246611  548178 out.go:345] Setting OutFile to fd 1 ...
I1008 17:57:11.246903  548178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:11.246914  548178 out.go:358] Setting ErrFile to fd 2...
I1008 17:57:11.246919  548178 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:11.247077  548178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
I1008 17:57:11.247893  548178 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:11.248001  548178 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:11.248359  548178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:11.248412  548178 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:11.263305  548178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
I1008 17:57:11.263822  548178 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:11.264497  548178 main.go:141] libmachine: Using API Version  1
I1008 17:57:11.264530  548178 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:11.264931  548178 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:11.265118  548178 main.go:141] libmachine: (functional-922806) Calling .GetState
I1008 17:57:11.266947  548178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:11.266981  548178 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:11.281406  548178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
I1008 17:57:11.281867  548178 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:11.282288  548178 main.go:141] libmachine: Using API Version  1
I1008 17:57:11.282340  548178 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:11.282711  548178 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:11.282863  548178 main.go:141] libmachine: (functional-922806) Calling .DriverName
I1008 17:57:11.283052  548178 ssh_runner.go:195] Run: systemctl --version
I1008 17:57:11.283078  548178 main.go:141] libmachine: (functional-922806) Calling .GetSSHHostname
I1008 17:57:11.285977  548178 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:11.286453  548178 main.go:141] libmachine: (functional-922806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:4c:59", ip: ""} in network mk-functional-922806: {Iface:virbr1 ExpiryTime:2024-10-08 18:54:00 +0000 UTC Type:0 Mac:52:54:00:f7:4c:59 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-922806 Clientid:01:52:54:00:f7:4c:59}
I1008 17:57:11.286490  548178 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined IP address 192.168.39.244 and MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:11.286642  548178 main.go:141] libmachine: (functional-922806) Calling .GetSSHPort
I1008 17:57:11.286832  548178 main.go:141] libmachine: (functional-922806) Calling .GetSSHKeyPath
I1008 17:57:11.286984  548178 main.go:141] libmachine: (functional-922806) Calling .GetSSHUsername
I1008 17:57:11.287141  548178 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/functional-922806/id_rsa Username:docker}
I1008 17:57:11.368779  548178 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 17:57:11.421508  548178 main.go:141] libmachine: Making call to close driver server
I1008 17:57:11.421528  548178 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:11.421840  548178 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:11.421888  548178 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:11.421906  548178 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 17:57:11.421915  548178 main.go:141] libmachine: Making call to close driver server
I1008 17:57:11.421925  548178 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:11.422175  548178 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:11.422239  548178 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:11.422270  548178 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
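
Both the JSON listing above and the YAML listing that follows are produced by running "sudo crictl images --output json" inside the guest VM (visible in the ssh_runner lines of the stderr). A minimal sketch of running the same check by hand, assuming the functional-922806 profile from this run is still up and the binary is invoked from the same workspace:

    # Spot-check the image list straight from CRI-O inside the VM (hypothetical invocation).
    out/minikube-linux-amd64 -p functional-922806 ssh -- sudo crictl images --output json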

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-922806 image ls --format yaml --alsologtostderr:
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: a78bcd38a43819f6f101da5d3aaf0b25b9488239055da23b119d0e0ade7a6153
repoDigests:
- localhost/minikube-local-cache-test@sha256:07da495cca581c2561c02396e96ef5350d501aa395b43414cbd965c78cdfef9c
repoTags:
- localhost/minikube-local-cache-test:functional-922806
size: "3330"
- id: bb156063767154be457111cfea51540b079c812895c7917cac77180c532016c2
repoDigests:
- localhost/my-image@sha256:e8cab58ff1620a6ef3c193448ce176a2146d17d9eda5ab6caac99cd64340b963
repoTags:
- localhost/my-image:functional-922806
size: "1468600"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 71fdfb6d8f2a674f9cdbac62466474cb92ee40b6a541693c29c6e0db1e3086b9
repoDigests:
- docker.io/library/e0dc0de3ba90e0455fc4a929ef0e741206de2ee8e6fe2c6ec9afc154653fea6f-tmp@sha256:41223b8a50255ed30fa05f98d15f708b81222621008fe9579f49db5b2e5ae761
repoTags: []
size: "1466018"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-922806
size: "4943877"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-922806 image ls --format yaml --alsologtostderr:
I1008 17:57:10.969452  548154 out.go:345] Setting OutFile to fd 1 ...
I1008 17:57:10.969722  548154 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:10.969734  548154 out.go:358] Setting ErrFile to fd 2...
I1008 17:57:10.969740  548154 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:10.969938  548154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
I1008 17:57:10.970556  548154 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:10.970673  548154 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:10.971074  548154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:10.971131  548154 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:10.985747  548154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
I1008 17:57:10.986260  548154 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:10.986844  548154 main.go:141] libmachine: Using API Version  1
I1008 17:57:10.986868  548154 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:10.987240  548154 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:10.987429  548154 main.go:141] libmachine: (functional-922806) Calling .GetState
I1008 17:57:10.989305  548154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:10.989343  548154 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:11.003925  548154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34669
I1008 17:57:11.004394  548154 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:11.004948  548154 main.go:141] libmachine: Using API Version  1
I1008 17:57:11.004974  548154 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:11.005371  548154 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:11.005611  548154 main.go:141] libmachine: (functional-922806) Calling .DriverName
I1008 17:57:11.005841  548154 ssh_runner.go:195] Run: systemctl --version
I1008 17:57:11.005872  548154 main.go:141] libmachine: (functional-922806) Calling .GetSSHHostname
I1008 17:57:11.009007  548154 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:11.009475  548154 main.go:141] libmachine: (functional-922806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:4c:59", ip: ""} in network mk-functional-922806: {Iface:virbr1 ExpiryTime:2024-10-08 18:54:00 +0000 UTC Type:0 Mac:52:54:00:f7:4c:59 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-922806 Clientid:01:52:54:00:f7:4c:59}
I1008 17:57:11.009505  548154 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined IP address 192.168.39.244 and MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:11.009641  548154 main.go:141] libmachine: (functional-922806) Calling .GetSSHPort
I1008 17:57:11.009810  548154 main.go:141] libmachine: (functional-922806) Calling .GetSSHKeyPath
I1008 17:57:11.009973  548154 main.go:141] libmachine: (functional-922806) Calling .GetSSHUsername
I1008 17:57:11.010130  548154 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/functional-922806/id_rsa Username:docker}
I1008 17:57:11.092724  548154 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 17:57:11.185723  548154 main.go:141] libmachine: Making call to close driver server
I1008 17:57:11.185738  548154 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:11.186049  548154 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:11.186071  548154 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 17:57:11.186086  548154 main.go:141] libmachine: Making call to close driver server
I1008 17:57:11.186094  548154 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:11.186360  548154 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:11.186394  548154 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:11.186409  548154 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh pgrep buildkitd: exit status 1 (236.835688ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image build -t localhost/my-image:functional-922806 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 image build -t localhost/my-image:functional-922806 testdata/build --alsologtostderr: (3.594837434s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-922806 image build -t localhost/my-image:functional-922806 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 71fdfb6d8f2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-922806
--> bb156063767
Successfully tagged localhost/my-image:functional-922806
bb156063767154be457111cfea51540b079c812895c7917cac77180c532016c2
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-922806 image build -t localhost/my-image:functional-922806 testdata/build --alsologtostderr:
I1008 17:57:07.159927  548081 out.go:345] Setting OutFile to fd 1 ...
I1008 17:57:07.160054  548081 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:07.160063  548081 out.go:358] Setting ErrFile to fd 2...
I1008 17:57:07.160068  548081 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 17:57:07.160263  548081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
I1008 17:57:07.160893  548081 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:07.161731  548081 config.go:182] Loaded profile config "functional-922806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1008 17:57:07.162402  548081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:07.162463  548081 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:07.177815  548081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32955
I1008 17:57:07.178337  548081 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:07.178956  548081 main.go:141] libmachine: Using API Version  1
I1008 17:57:07.179004  548081 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:07.179423  548081 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:07.179624  548081 main.go:141] libmachine: (functional-922806) Calling .GetState
I1008 17:57:07.181764  548081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 17:57:07.181804  548081 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 17:57:07.196945  548081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
I1008 17:57:07.197417  548081 main.go:141] libmachine: () Calling .GetVersion
I1008 17:57:07.198024  548081 main.go:141] libmachine: Using API Version  1
I1008 17:57:07.198044  548081 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 17:57:07.198352  548081 main.go:141] libmachine: () Calling .GetMachineName
I1008 17:57:07.198547  548081 main.go:141] libmachine: (functional-922806) Calling .DriverName
I1008 17:57:07.198762  548081 ssh_runner.go:195] Run: systemctl --version
I1008 17:57:07.198795  548081 main.go:141] libmachine: (functional-922806) Calling .GetSSHHostname
I1008 17:57:07.201569  548081 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:07.202034  548081 main.go:141] libmachine: (functional-922806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:4c:59", ip: ""} in network mk-functional-922806: {Iface:virbr1 ExpiryTime:2024-10-08 18:54:00 +0000 UTC Type:0 Mac:52:54:00:f7:4c:59 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-922806 Clientid:01:52:54:00:f7:4c:59}
I1008 17:57:07.202065  548081 main.go:141] libmachine: (functional-922806) DBG | domain functional-922806 has defined IP address 192.168.39.244 and MAC address 52:54:00:f7:4c:59 in network mk-functional-922806
I1008 17:57:07.202203  548081 main.go:141] libmachine: (functional-922806) Calling .GetSSHPort
I1008 17:57:07.202380  548081 main.go:141] libmachine: (functional-922806) Calling .GetSSHKeyPath
I1008 17:57:07.202505  548081 main.go:141] libmachine: (functional-922806) Calling .GetSSHUsername
I1008 17:57:07.202639  548081 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/functional-922806/id_rsa Username:docker}
I1008 17:57:07.292626  548081 build_images.go:161] Building image from path: /tmp/build.4190766799.tar
I1008 17:57:07.292702  548081 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1008 17:57:07.306037  548081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4190766799.tar
I1008 17:57:07.311621  548081 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4190766799.tar: stat -c "%s %y" /var/lib/minikube/build/build.4190766799.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4190766799.tar': No such file or directory
I1008 17:57:07.311649  548081 ssh_runner.go:362] scp /tmp/build.4190766799.tar --> /var/lib/minikube/build/build.4190766799.tar (3072 bytes)
I1008 17:57:07.346803  548081 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4190766799
I1008 17:57:07.361286  548081 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4190766799 -xf /var/lib/minikube/build/build.4190766799.tar
I1008 17:57:07.376460  548081 crio.go:315] Building image: /var/lib/minikube/build/build.4190766799
I1008 17:57:07.376546  548081 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-922806 /var/lib/minikube/build/build.4190766799 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1008 17:57:10.676767  548081 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-922806 /var/lib/minikube/build/build.4190766799 --cgroup-manager=cgroupfs: (3.30018956s)
I1008 17:57:10.676860  548081 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4190766799
I1008 17:57:10.689277  548081 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4190766799.tar
I1008 17:57:10.699107  548081 build_images.go:217] Built localhost/my-image:functional-922806 from /tmp/build.4190766799.tar
I1008 17:57:10.699143  548081 build_images.go:133] succeeded building to: functional-922806
I1008 17:57:10.699148  548081 build_images.go:134] failed building to: 
I1008 17:57:10.699168  548081 main.go:141] libmachine: Making call to close driver server
I1008 17:57:10.699182  548081 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:10.699498  548081 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:10.699518  548081 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:10.699523  548081 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 17:57:10.699538  548081 main.go:141] libmachine: Making call to close driver server
I1008 17:57:10.699547  548081 main.go:141] libmachine: (functional-922806) Calling .Close
I1008 17:57:10.699802  548081 main.go:141] libmachine: (functional-922806) DBG | Closing plugin on server side
I1008 17:57:10.699843  548081 main.go:141] libmachine: Successfully made call to close driver server
I1008 17:57:10.699855  548081 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
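
The three STEP lines in the stdout above correspond to a three-instruction Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of a build context that would reproduce them; the payload written to content.txt is an assumption, since the actual testdata/build contents are not shown here:

    # Recreate a stand-in build context and build it through minikube (sketch only).
    mkdir -p /tmp/build-sketch
    printf 'placeholder\n' > /tmp/build-sketch/content.txt    # assumed payload, not the real testdata file
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
    out/minikube-linux-amd64 -p functional-922806 image build -t localhost/my-image:functional-922806 /tmp/build-sketch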

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-922806
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image load --daemon kicbase/echo-server:functional-922806 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 image load --daemon kicbase/echo-server:functional-922806 --alsologtostderr: (4.013416372s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image load --daemon kicbase/echo-server:functional-922806 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-922806
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image load --daemon kicbase/echo-server:functional-922806 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image save kicbase/echo-server:functional-922806 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image rm kicbase/echo-server:functional-922806 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-922806 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.677843688s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-922806
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 image save --daemon kicbase/echo-server:functional-922806 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-922806
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 service list -o json
functional_test.go:1494: Took "443.689938ms" to run "out/minikube-linux-amd64 -p functional-922806 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.244:30615
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.244:30615
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
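
The endpoint printed above is a plain NodePort URL on the VM's IP. A quick sanity check against it, assuming the cluster from this run is still reachable:

    # Hypothetical follow-up request against the reported endpoint.
    curl -s http://192.168.39.244:30615/ | head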

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "447.112974ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.891192ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "355.543676ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.51398ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdany-port214574689/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728410215147304126" to /tmp/TestFunctionalparallelMountCmdany-port214574689/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728410215147304126" to /tmp/TestFunctionalparallelMountCmdany-port214574689/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728410215147304126" to /tmp/TestFunctionalparallelMountCmdany-port214574689/001/test-1728410215147304126
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (212.960888ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 17:56:55.360597  537013 retry.go:31] will retry after 367.57761ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  8 17:56 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  8 17:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  8 17:56 test-1728410215147304126
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh cat /mount-9p/test-1728410215147304126
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-922806 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d4cbda85-42fa-405b-9f2f-8aed78e0d642] Pending
helpers_test.go:344: "busybox-mount" [d4cbda85-42fa-405b-9f2f-8aed78e0d642] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d4cbda85-42fa-405b-9f2f-8aed78e0d642] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d4cbda85-42fa-405b-9f2f-8aed78e0d642] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.005373249s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-922806 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdany-port214574689/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.40s)
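
The any-port test exercises the full 9p mount flow: start the mount daemon on the host, verify the mount with findmnt inside the guest, then let a pod read and write through it. A minimal sketch of the manual equivalent, with /tmp/demo as an assumed stand-in for the host directory:

    # Manual 9p mount check (sketch; assumes the functional-922806 VM is still running).
    mkdir -p /tmp/demo
    out/minikube-linux-amd64 mount -p functional-922806 /tmp/demo:/mount-9p &
    out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-922806 ssh -- ls -la /mount-9p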

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdspecific-port587796567/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p"
E1008 17:57:13.702542  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (190.855831ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 17:57:13.738644  537013 retry.go:31] will retry after 531.187319ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdspecific-port587796567/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh "sudo umount -f /mount-9p": exit status 1 (191.521771ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-922806 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdspecific-port587796567/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786475790/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786475790/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786475790/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T" /mount1: exit status 1 (226.769676ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 17:57:15.484951  537013 retry.go:31] will retry after 699.733108ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-922806 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-922806 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786475790/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786475790/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-922806 /tmp/TestFunctionalparallelMountCmdVerifyCleanup786475790/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-922806
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-922806
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-922806
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (191.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-094095 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1008 17:58:35.624623  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-094095 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.401077579s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (191.07s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-094095 -- rollout status deployment/busybox: (7.290137532s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-gxdk6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-n779r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-rxwcg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-gxdk6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-n779r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-rxwcg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-gxdk6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-n779r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-rxwcg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.58s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-gxdk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-gxdk6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-n779r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-n779r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-rxwcg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-094095 -- exec busybox-7dff88458-rxwcg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
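Each pod is probed twice: resolve host.minikube.internal inside the pod, then ping the host address it maps to (192.168.39.1 on this network). A minimal sketch with the pod name parameterised:

    POD=busybox-7dff88458-gxdk6   # any pod name returned by the get pods call above
    out/minikube-linux-amd64 kubectl -p ha-094095 -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-094095 -- exec "$POD" -- sh -c "ping -c 1 192.168.39.1"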

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (54.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-094095 -v=7 --alsologtostderr
E1008 18:00:51.764165  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:19.466841  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-094095 -v=7 --alsologtostderr: (53.260475926s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.09s)
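Adding the fourth machine (a worker) is a single node add against the running profile, followed by a status check; commands as recorded above:

    out/minikube-linux-amd64 node add -p ha-094095 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr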

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-094095 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp testdata/cp-test.txt ha-094095:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095:/home/docker/cp-test.txt ha-094095-m02:/home/docker/cp-test_ha-094095_ha-094095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test_ha-094095_ha-094095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095:/home/docker/cp-test.txt ha-094095-m03:/home/docker/cp-test_ha-094095_ha-094095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test_ha-094095_ha-094095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095:/home/docker/cp-test.txt ha-094095-m04:/home/docker/cp-test_ha-094095_ha-094095-m04.txt
E1008 18:01:38.895645  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:38.902032  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:38.913457  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:38.935018  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:38.976467  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:01:39.057962  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test.txt"
E1008 18:01:39.219455  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test_ha-094095_ha-094095-m04.txt"
E1008 18:01:39.541635  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp testdata/cp-test.txt ha-094095-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m02.txt
E1008 18:01:40.183988  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m02:/home/docker/cp-test.txt ha-094095:/home/docker/cp-test_ha-094095-m02_ha-094095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test_ha-094095-m02_ha-094095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m02:/home/docker/cp-test.txt ha-094095-m03:/home/docker/cp-test_ha-094095-m02_ha-094095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test.txt"
E1008 18:01:41.465316  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test_ha-094095-m02_ha-094095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m02:/home/docker/cp-test.txt ha-094095-m04:/home/docker/cp-test_ha-094095-m02_ha-094095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test_ha-094095-m02_ha-094095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp testdata/cp-test.txt ha-094095-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt ha-094095:/home/docker/cp-test_ha-094095-m03_ha-094095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test_ha-094095-m03_ha-094095.txt"
E1008 18:01:44.026696  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt ha-094095-m02:/home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test_ha-094095-m03_ha-094095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m03:/home/docker/cp-test.txt ha-094095-m04:/home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test_ha-094095-m03_ha-094095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp testdata/cp-test.txt ha-094095-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile781590352/001/cp-test_ha-094095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt ha-094095:/home/docker/cp-test_ha-094095-m04_ha-094095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test_ha-094095-m04_ha-094095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt ha-094095-m02:/home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test_ha-094095-m04_ha-094095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 cp ha-094095-m04:/home/docker/cp-test.txt ha-094095-m03:/home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m03 "sudo cat /home/docker/cp-test_ha-094095-m04_ha-094095-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.82s)
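The copy matrix above repeats one pattern for every source/destination pair: cp the file, then ssh to the destination and cat it back. One round of that pattern, with names and paths from this run:

    # Host -> node: copy a local file into ha-094095 and read it back to verify
    out/minikube-linux-amd64 -p ha-094095 cp testdata/cp-test.txt ha-094095:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095 "sudo cat /home/docker/cp-test.txt"
    # Node -> node: copy from ha-094095 to ha-094095-m02 and verify on the destination
    out/minikube-linux-amd64 -p ha-094095 cp ha-094095:/home/docker/cp-test.txt ha-094095-m02:/home/docker/cp-test_ha-094095_ha-094095-m02.txt
    out/minikube-linux-amd64 -p ha-094095 ssh -n ha-094095-m02 "sudo cat /home/docker/cp-test_ha-094095_ha-094095-m02.txt"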

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (241.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-094095 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1008 18:21:38.896667  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-094095 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m0.198012973s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (241.08s)
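The restart simply re-runs start against the existing profile (the saved profile supplies the HA topology, so no --ha flag is passed this time) and then confirms readiness; the go-template query logged above is what checks each node's Ready condition. Sketch of the first steps:

    out/minikube-linux-amd64 start -p ha-094095 --wait=true -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
    kubectl get nodes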

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (70.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-094095 --control-plane -v=7 --alsologtostderr
E1008 18:25:51.765125  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-094095 --control-plane -v=7 --alsologtostderr: (1m10.008960646s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.84s)
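Adding another control-plane member differs from the worker case above only by the --control-plane flag:

    out/minikube-linux-amd64 node add -p ha-094095 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-094095 status -v=7 --alsologtostderr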

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
x
+
TestJSONOutput/start/Command (53.05s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-547390 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1008 18:26:38.897553  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-547390 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.048285032s)
--- PASS: TestJSONOutput/start/Command (53.05s)
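With --output=json, start prints one CloudEvents-style JSON object per line instead of the usual step text (the same shape is visible verbatim in the TestErrorJSONOutput stdout further down). Invocation as logged:

    out/minikube-linux-amd64 start -p json-output-547390 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio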

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-547390 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-547390 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-547390 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-547390 --output=json --user=testUser: (7.329667902s)
--- PASS: TestJSONOutput/stop/Command (7.33s)
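The pause, unpause and stop checks in this group all reuse the json-output-547390 profile and only swap the subcommand; gathered in one place, the three invocations recorded above are:

    out/minikube-linux-amd64 pause -p json-output-547390 --output=json --user=testUser
    out/minikube-linux-amd64 unpause -p json-output-547390 --output=json --user=testUser
    out/minikube-linux-amd64 stop -p json-output-547390 --output=json --user=testUser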

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-880265 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-880265 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.45587ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c4f2d81c-0774-45a0-ab4b-710b85592c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-880265] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e57ec0e-e22c-4bf4-91cb-43cb135592a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19774"}}
	{"specversion":"1.0","id":"ebc10892-f2aa-45f3-87f0-40e6c8aafceb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5bf97a7e-92b8-4aa2-b05e-351bf786cbb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig"}}
	{"specversion":"1.0","id":"343ea04f-944c-4ede-af95-aa1395b70730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube"}}
	{"specversion":"1.0","id":"70e81975-83b2-4f2f-a4c3-d95f85656eab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1aadbccb-d3a4-41a6-99a4-91db16f477c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8c22357-0205-4d8d-b933-0cc96f7432fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-880265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-880265
--- PASS: TestErrorJSONOutput (0.19s)
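The error path asks for a driver that does not exist; the run exits with status 56 and the final JSON event carries the DRV_UNSUPPORTED_OS message shown in the stdout block above. A sketch of the check, with the exit-code echo added here purely for illustration:

    out/minikube-linux-amd64 start -p json-output-error-880265 --memory=2200 --output=json --wait=true --driver=fail
    echo "exit code: $?"   # 56 in this run
    # Clean up the half-created profile afterwards
    out/minikube-linux-amd64 delete -p json-output-error-880265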

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (85.55s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-057770 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-057770 --driver=kvm2  --container-runtime=crio: (42.298451416s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-070950 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-070950 --driver=kvm2  --container-runtime=crio: (40.605851538s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-057770
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-070950
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-070950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-070950
helpers_test.go:175: Cleaning up "first-057770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-057770
--- PASS: TestMinikubeProfile (85.55s)
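The profile test creates two independent single-node clusters, switches the active profile between them, and lists profiles as JSON each time; commands from the log, ending with the clean-up the helpers perform:

    out/minikube-linux-amd64 start -p first-057770 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-070950 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-057770      # make first-057770 the active profile
    out/minikube-linux-amd64 profile list -ojson
    out/minikube-linux-amd64 delete -p second-070950
    out/minikube-linux-amd64 delete -p first-057770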

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-085348 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1008 18:28:54.832796  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-085348 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.347818547s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.35s)
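The mount-start tests boot a Kubernetes-free VM with a 9p host mount; flags are taken from the run above (the second profile later uses --mount-port 46465 instead of 46464):

    out/minikube-linux-amd64 start -p mount-start-1-085348 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio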

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085348 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085348 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
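Verification is two ssh probes: the mounted directory must be listable inside the guest, and a 9p entry must show up in its mount table (the grep below runs on the host against the ssh output, which is sufficient for the check):

    out/minikube-linux-amd64 -p mount-start-1-085348 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-085348 ssh -- mount | grep 9p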

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-104490 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-104490 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.14649728s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.15s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-104490 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-104490 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-085348 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-104490 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-104490 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-104490
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-104490: (1.278675246s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.39s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-104490
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-104490: (21.392469753s)
--- PASS: TestMountStart/serial/RestartStopped (22.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-104490 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-104490 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (109.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-255508 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1008 18:30:51.765091  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
E1008 18:31:38.896262  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-255508 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.23446263s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.63s)
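The plain multi-node variant brings up one control plane plus one worker in a single invocation via --nodes=2, then checks status; commands as logged:

    out/minikube-linux-amd64 start -p multinode-255508 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr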

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-255508 -- rollout status deployment/busybox: (3.437541493s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-fxhf4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-rcwtb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-fxhf4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-rcwtb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-fxhf4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-rcwtb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.90s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-fxhf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-fxhf4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-rcwtb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-255508 -- exec busybox-7dff88458-rcwtb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (48.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-255508 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-255508 -v 3 --alsologtostderr: (47.467188921s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.04s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-255508 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp testdata/cp-test.txt multinode-255508:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile543339778/001/cp-test_multinode-255508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508:/home/docker/cp-test.txt multinode-255508-m02:/home/docker/cp-test_multinode-255508_multinode-255508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m02 "sudo cat /home/docker/cp-test_multinode-255508_multinode-255508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508:/home/docker/cp-test.txt multinode-255508-m03:/home/docker/cp-test_multinode-255508_multinode-255508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m03 "sudo cat /home/docker/cp-test_multinode-255508_multinode-255508-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp testdata/cp-test.txt multinode-255508-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile543339778/001/cp-test_multinode-255508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt multinode-255508:/home/docker/cp-test_multinode-255508-m02_multinode-255508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508 "sudo cat /home/docker/cp-test_multinode-255508-m02_multinode-255508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508-m02:/home/docker/cp-test.txt multinode-255508-m03:/home/docker/cp-test_multinode-255508-m02_multinode-255508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m03 "sudo cat /home/docker/cp-test_multinode-255508-m02_multinode-255508-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp testdata/cp-test.txt multinode-255508-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile543339778/001/cp-test_multinode-255508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt multinode-255508:/home/docker/cp-test_multinode-255508-m03_multinode-255508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508 "sudo cat /home/docker/cp-test_multinode-255508-m03_multinode-255508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 cp multinode-255508-m03:/home/docker/cp-test.txt multinode-255508-m02:/home/docker/cp-test_multinode-255508-m03_multinode-255508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 ssh -n multinode-255508-m02 "sudo cat /home/docker/cp-test_multinode-255508-m03_multinode-255508-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.18s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 node stop m03: (1.481068143s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-255508 status: exit status 7 (405.62774ms)

                                                
                                                
-- stdout --
	multinode-255508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-255508-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-255508-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr: exit status 7 (413.163523ms)

                                                
                                                
-- stdout --
	multinode-255508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-255508-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-255508-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 18:32:59.371264  567140 out.go:345] Setting OutFile to fd 1 ...
	I1008 18:32:59.371392  567140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:32:59.371403  567140 out.go:358] Setting ErrFile to fd 2...
	I1008 18:32:59.371410  567140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 18:32:59.371622  567140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19774-529764/.minikube/bin
	I1008 18:32:59.371807  567140 out.go:352] Setting JSON to false
	I1008 18:32:59.371836  567140 mustload.go:65] Loading cluster: multinode-255508
	I1008 18:32:59.371915  567140 notify.go:220] Checking for updates...
	I1008 18:32:59.372339  567140 config.go:182] Loaded profile config "multinode-255508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1008 18:32:59.372368  567140 status.go:174] checking status of multinode-255508 ...
	I1008 18:32:59.372915  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.372965  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.388566  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35255
	I1008 18:32:59.389072  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.389652  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.389680  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.390066  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.390282  567140 main.go:141] libmachine: (multinode-255508) Calling .GetState
	I1008 18:32:59.391668  567140 status.go:371] multinode-255508 host status = "Running" (err=<nil>)
	I1008 18:32:59.391686  567140 host.go:66] Checking if "multinode-255508" exists ...
	I1008 18:32:59.391954  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.391993  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.407445  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I1008 18:32:59.407800  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.408262  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.408285  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.408633  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.408835  567140 main.go:141] libmachine: (multinode-255508) Calling .GetIP
	I1008 18:32:59.411495  567140 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:32:59.411882  567140 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:32:59.411903  567140 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:32:59.412004  567140 host.go:66] Checking if "multinode-255508" exists ...
	I1008 18:32:59.412273  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.412308  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.427198  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I1008 18:32:59.427695  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.428116  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.428128  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.428467  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.428642  567140 main.go:141] libmachine: (multinode-255508) Calling .DriverName
	I1008 18:32:59.428863  567140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:32:59.428887  567140 main.go:141] libmachine: (multinode-255508) Calling .GetSSHHostname
	I1008 18:32:59.431406  567140 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:32:59.431815  567140 main.go:141] libmachine: (multinode-255508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:00:de", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:30:20 +0000 UTC Type:0 Mac:52:54:00:e5:00:de Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-255508 Clientid:01:52:54:00:e5:00:de}
	I1008 18:32:59.431848  567140 main.go:141] libmachine: (multinode-255508) DBG | domain multinode-255508 has defined IP address 192.168.39.43 and MAC address 52:54:00:e5:00:de in network mk-multinode-255508
	I1008 18:32:59.431985  567140 main.go:141] libmachine: (multinode-255508) Calling .GetSSHPort
	I1008 18:32:59.432144  567140 main.go:141] libmachine: (multinode-255508) Calling .GetSSHKeyPath
	I1008 18:32:59.432292  567140 main.go:141] libmachine: (multinode-255508) Calling .GetSSHUsername
	I1008 18:32:59.432398  567140 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508/id_rsa Username:docker}
	I1008 18:32:59.510015  567140 ssh_runner.go:195] Run: systemctl --version
	I1008 18:32:59.515515  567140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:32:59.530405  567140 kubeconfig.go:125] found "multinode-255508" server: "https://192.168.39.43:8443"
	I1008 18:32:59.530448  567140 api_server.go:166] Checking apiserver status ...
	I1008 18:32:59.530485  567140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 18:32:59.544316  567140 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W1008 18:32:59.553630  567140 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 18:32:59.553676  567140 ssh_runner.go:195] Run: ls
	I1008 18:32:59.557966  567140 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1008 18:32:59.561997  567140 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I1008 18:32:59.562017  567140 status.go:463] multinode-255508 apiserver status = Running (err=<nil>)
	I1008 18:32:59.562026  567140 status.go:176] multinode-255508 status: &{Name:multinode-255508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:32:59.562041  567140 status.go:174] checking status of multinode-255508-m02 ...
	I1008 18:32:59.562338  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.562369  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.578119  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45605
	I1008 18:32:59.578576  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.579079  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.579103  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.579413  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.579607  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .GetState
	I1008 18:32:59.581108  567140 status.go:371] multinode-255508-m02 host status = "Running" (err=<nil>)
	I1008 18:32:59.581123  567140 host.go:66] Checking if "multinode-255508-m02" exists ...
	I1008 18:32:59.581383  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.581414  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.596380  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I1008 18:32:59.596820  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.597264  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.597284  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.597618  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.597780  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .GetIP
	I1008 18:32:59.600494  567140 main.go:141] libmachine: (multinode-255508-m02) DBG | domain multinode-255508-m02 has defined MAC address 52:54:00:ae:ca:c3 in network mk-multinode-255508
	I1008 18:32:59.600955  567140 main.go:141] libmachine: (multinode-255508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ca:c3", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:31:23 +0000 UTC Type:0 Mac:52:54:00:ae:ca:c3 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-255508-m02 Clientid:01:52:54:00:ae:ca:c3}
	I1008 18:32:59.600982  567140 main.go:141] libmachine: (multinode-255508-m02) DBG | domain multinode-255508-m02 has defined IP address 192.168.39.35 and MAC address 52:54:00:ae:ca:c3 in network mk-multinode-255508
	I1008 18:32:59.601168  567140 host.go:66] Checking if "multinode-255508-m02" exists ...
	I1008 18:32:59.601459  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.601494  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.615912  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I1008 18:32:59.616313  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.616771  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.616801  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.617060  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.617241  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .DriverName
	I1008 18:32:59.617406  567140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 18:32:59.617426  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .GetSSHHostname
	I1008 18:32:59.620097  567140 main.go:141] libmachine: (multinode-255508-m02) DBG | domain multinode-255508-m02 has defined MAC address 52:54:00:ae:ca:c3 in network mk-multinode-255508
	I1008 18:32:59.620473  567140 main.go:141] libmachine: (multinode-255508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:ca:c3", ip: ""} in network mk-multinode-255508: {Iface:virbr1 ExpiryTime:2024-10-08 19:31:23 +0000 UTC Type:0 Mac:52:54:00:ae:ca:c3 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-255508-m02 Clientid:01:52:54:00:ae:ca:c3}
	I1008 18:32:59.620496  567140 main.go:141] libmachine: (multinode-255508-m02) DBG | domain multinode-255508-m02 has defined IP address 192.168.39.35 and MAC address 52:54:00:ae:ca:c3 in network mk-multinode-255508
	I1008 18:32:59.620592  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .GetSSHPort
	I1008 18:32:59.620775  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .GetSSHKeyPath
	I1008 18:32:59.620922  567140 main.go:141] libmachine: (multinode-255508-m02) Calling .GetSSHUsername
	I1008 18:32:59.621151  567140 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19774-529764/.minikube/machines/multinode-255508-m02/id_rsa Username:docker}
	I1008 18:32:59.701760  567140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 18:32:59.717146  567140 status.go:176] multinode-255508-m02 status: &{Name:multinode-255508-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1008 18:32:59.717178  567140 status.go:174] checking status of multinode-255508-m03 ...
	I1008 18:32:59.717506  567140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 18:32:59.717548  567140 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 18:32:59.733154  567140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I1008 18:32:59.733645  567140 main.go:141] libmachine: () Calling .GetVersion
	I1008 18:32:59.734093  567140 main.go:141] libmachine: Using API Version  1
	I1008 18:32:59.734112  567140 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 18:32:59.734485  567140 main.go:141] libmachine: () Calling .GetMachineName
	I1008 18:32:59.734681  567140 main.go:141] libmachine: (multinode-255508-m03) Calling .GetState
	I1008 18:32:59.736297  567140 status.go:371] multinode-255508-m03 host status = "Stopped" (err=<nil>)
	I1008 18:32:59.736310  567140 status.go:384] host is not running, skipping remaining checks
	I1008 18:32:59.736315  567140 status.go:176] multinode-255508-m03 status: &{Name:multinode-255508-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
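
Note: the status run logged above decides "APIServer: Running" by fetching https://192.168.39.43:8443/healthz and accepting a 200 response with body "ok". The following is a minimal sketch of that kind of probe, not minikube's actual status code; skipping CA verification here is purely an illustrative shortcut (the real check trusts the cluster certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy sends GET https://<node-ip>:8443/healthz and treats
// HTTP 200 with body "ok" as a running apiserver, as seen in the log above.
func apiserverHealthy(ip string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut only; a real check would verify the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", ip))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("192.168.39.43")
	fmt.Println(healthy, err)
}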

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (36.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 node start m03 -v=7 --alsologtostderr: (35.775108248s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.39s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-255508 node delete m03: (1.609004781s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.12s)
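
Note: the final kubectl call above uses a go-template that, for every node, prints the status of its "Ready" condition. Below is a small standalone sketch of how that exact template evaluates; the nested maps are made-up sample data standing in for the NodeList JSON that kubectl actually feeds the template.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template text as in the test run above.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	// Sample data only: two identical nodes, each with a Ready=True condition.
	ready := map[string]interface{}{"type": "Ready", "status": "True"}
	mem := map[string]interface{}{"type": "MemoryPressure", "status": "False"}
	node := map[string]interface{}{
		"status": map[string]interface{}{"conditions": []interface{}{mem, ready}},
	}
	data := map[string]interface{}{"items": []interface{}{node, node}}
	_ = tmpl.Execute(os.Stdout, data) // prints " True" on its own line, once per node
}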

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (181.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-255508 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1008 18:41:38.897540  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-255508 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.535185428s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-255508 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.05s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-255508
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-255508-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-255508-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (62.693174ms)

                                                
                                                
-- stdout --
	* [multinode-255508-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-255508-m02' is duplicated with machine name 'multinode-255508-m02' in profile 'multinode-255508'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-255508-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-255508-m03 --driver=kvm2  --container-runtime=crio: (43.852551379s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-255508
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-255508: exit status 80 (210.221137ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-255508 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-255508-m03 already exists in multinode-255508-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-255508-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-255508-m03: (1.008838244s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.18s)
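
Note: the test above shows that a new profile named after a machine an existing multi-node profile already owns (multinode-255508-m02) is rejected with MK_USAGE, while an unused name (multinode-255508-m03) is accepted. A hypothetical sketch of that duplicate-name rule follows; the map layout and function are illustrative, only the names come from the log.

package main

import "fmt"

// conflicts reports which existing profile already owns a machine whose
// name matches the proposed new profile name, if any.
func conflicts(newProfile string, profileMachines map[string][]string) (string, bool) {
	for profile, machines := range profileMachines {
		for _, m := range machines {
			if m == newProfile {
				return profile, true
			}
		}
	}
	return "", false
}

func main() {
	existing := map[string][]string{
		"multinode-255508": {"multinode-255508", "multinode-255508-m02"},
	}
	if p, dup := conflicts("multinode-255508-m02", existing); dup {
		fmt.Printf("profile name is duplicated with a machine name in profile %q\n", p)
	}
}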

                                                
                                    
x
+
TestScheduledStopUnix (113.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-010854 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-010854 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.70892664s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010854 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-010854 -n scheduled-stop-010854
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010854 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1008 18:50:21.296355  537013 retry.go:31] will retry after 67.438µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.297521  537013 retry.go:31] will retry after 102.849µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.298644  537013 retry.go:31] will retry after 117.801µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.299772  537013 retry.go:31] will retry after 234.631µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.300900  537013 retry.go:31] will retry after 533.378µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.302034  537013 retry.go:31] will retry after 465.432µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.303160  537013 retry.go:31] will retry after 628.991µs: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.304290  537013 retry.go:31] will retry after 2.213677ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.307443  537013 retry.go:31] will retry after 3.175459ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.311656  537013 retry.go:31] will retry after 4.811087ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.316885  537013 retry.go:31] will retry after 8.418512ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.326109  537013 retry.go:31] will retry after 9.049973ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.335283  537013 retry.go:31] will retry after 11.136539ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.347549  537013 retry.go:31] will retry after 13.796035ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
I1008 18:50:21.361756  537013 retry.go:31] will retry after 25.282818ms: open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010854 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-010854 -n scheduled-stop-010854
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-010854
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-010854 --schedule 15s
E1008 18:50:51.766301  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1008 18:51:21.962902  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-010854
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-010854: exit status 7 (67.559239ms)

                                                
                                                
-- stdout --
	scheduled-stop-010854
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-010854 -n scheduled-stop-010854
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-010854 -n scheduled-stop-010854: exit status 7 (67.382982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-010854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-010854
--- PASS: TestScheduledStopUnix (113.31s)
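
Note: the run of "will retry after ..." lines above comes from a retry helper polling for the scheduled-stop pid file with a growing, jittered delay. The following is a minimal sketch of that pattern, not minikube's retry.go; the backoff constants and deadline are illustrative, only the pid-file path is taken from the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path until it exists or the deadline passes,
// doubling a jittered delay between attempts, as in the retry log above.
func waitForFile(path string, deadline time.Duration) error {
	delay := 100 * time.Microsecond
	start := time.Now()
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !errors.Is(err, os.ErrNotExist) {
			return err
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		// grow the delay and add jitter before the next attempt
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
}

func main() {
	err := waitForFile("/home/jenkins/minikube-integration/19774-529764/.minikube/profiles/scheduled-stop-010854/pid", 2*time.Second)
	fmt.Println(err)
}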

                                                
                                    
x
+
TestRunningBinaryUpgrade (160.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.429309355 start -p running-upgrade-390529 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.429309355 start -p running-upgrade-390529 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.712026437s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-390529 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-390529 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.346798139s)
helpers_test.go:175: Cleaning up "running-upgrade-390529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-390529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-390529: (1.219082577s)
--- PASS: TestRunningBinaryUpgrade (160.73s)

                                                
                                    
x
+
TestPause/serial/Start (106.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-078692 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-078692 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.987296126s)
--- PASS: TestPause/serial/Start (106.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (176.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2024931756 start -p stopped-upgrade-204592 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1008 18:51:38.896177  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/functional-922806/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2024931756 start -p stopped-upgrade-204592 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.282672827s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2024931756 -p stopped-upgrade-204592 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2024931756 -p stopped-upgrade-204592 stop: (2.121626973s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-204592 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-204592 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.846805816s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (176.25s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-204592
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038693 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-038693 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (64.595566ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-038693] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19774-529764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19774-529764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
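
Note: the start above fails with exit status 14 (MK_USAGE) because --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A hypothetical flag-validation sketch of that rule follows; the flag names and exit code mirror the log, the validation code itself is illustrative and not minikube's.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The two options contradict each other, so refuse the combination.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status recorded above
	}
	fmt.Println("ok")
}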

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (65.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038693 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038693 --driver=kvm2  --container-runtime=crio: (1m4.852850814s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-038693 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (65.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (46.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038693 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1008 18:55:51.764710  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038693 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.648743667s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-038693 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-038693 status -o json: exit status 2 (261.214138ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-038693","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-038693
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-038693: (1.069517085s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (24.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038693 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038693 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.102065824s)
--- PASS: TestNoKubernetes/serial/Start (24.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-038693 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-038693 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.300546ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
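
Note: the check above runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats the non-zero exit (status 3, systemd's code for an inactive unit) as confirmation that the kubelet is stopped. Below is a local-exec sketch of that interpretation; the SSH transport and sudo are omitted for brevity and the helper name is made up.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the is-active check above: exit 0 means the unit
// is active, any other exit code from systemctl means it is not running.
func kubeletActive() (bool, error) {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: unit inactive or failed
	}
	return false, err // systemctl itself could not be started
}

func main() {
	active, err := kubeletActive()
	fmt.Println(active, err)
}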

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.573741266s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.071330205s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-038693
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-038693: (1.31971606s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (58.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-038693 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-038693 --driver=kvm2  --container-runtime=crio: (58.353546822s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (58.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-038693 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-038693 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.760226ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (101.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-966632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-966632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m41.04799596s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (87.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-783146 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-783146 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m27.681296279s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-966632 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b00109c3-21fa-4966-b312-8aabc0302e65] Pending
helpers_test.go:344: "busybox" [b00109c3-21fa-4966-b312-8aabc0302e65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b00109c3-21fa-4966-b312-8aabc0302e65] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003553296s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-966632 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)
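
Note: the DeployApp steps above wait up to 8m0s for pods labeled integration-test=busybox to progress from Pending to Running. A rough sketch of such a wait loop via kubectl follows; it is not the helpers_test.go implementation, the jsonpath query and 2-second poll interval are illustrative, while the context name, label, namespace, and timeout come from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pods matching label in the default namespace until
// one reports phase Running or the timeout expires.
func waitForRunning(context, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-l", label, "-n", "default",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", label, timeout)
}

func main() {
	fmt.Println(waitForRunning("no-preload-966632", "integration-test=busybox", 8*time.Minute))
}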

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-966632 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-966632 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-142496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1008 19:00:51.764839  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/addons-738106/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-142496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (53.330878082s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-783146 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f] Pending
helpers_test.go:344: "busybox" [bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bf19e3f3-89f0-4f0e-baf5-c65138d8ce0f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004268275s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-783146 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [91e9b6c2-db63-4809-8a71-a8ff6b938182] Pending
helpers_test.go:344: "busybox" [91e9b6c2-db63-4809-8a71-a8ff6b938182] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [91e9b6c2-db63-4809-8a71-a8ff6b938182] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003372733s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-783146 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-783146 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-142496 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-142496 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (646.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-966632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-966632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m46.076575295s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966632 -n no-preload-966632
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (646.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (519.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-783146 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-783146 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m39.40601726s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783146 -n embed-certs-783146
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (519.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (551.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-142496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-142496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m11.106549744s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142496 -n default-k8s-diff-port-142496
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (551.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-256554 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-256554 --alsologtostderr -v=3: (4.283593864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-256554 -n old-k8s-version-256554: exit status 7 (71.341949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-256554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-602180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-602180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (48.83291877s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-602180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-602180 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-602180 --alsologtostderr -v=3: (7.490858519s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-602180 -n newest-cni-602180
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-602180 -n newest-cni-602180: exit status 7 (76.721284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-602180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.99s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-602180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-602180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (37.925884603s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-602180 -n newest-cni-602180
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-602180 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-602180 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-602180 --alsologtostderr -v=1: (1.869439022s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-602180 -n newest-cni-602180
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-602180 -n newest-cni-602180: exit status 2 (423.574212ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-602180 -n newest-cni-602180
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-602180 -n newest-cni-602180: exit status 2 (363.817254ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-602180 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-602180 --alsologtostderr -v=1: (1.186561667s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-602180 -n newest-cni-602180
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-602180 -n newest-cni-602180
E1008 19:29:41.888786  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:29:41.895573  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:29:41.907006  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:29:41.928565  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:29:41.970386  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.crt: no such file or directory" logger="UnhandledError"
E1008 19:29:42.052070  537013 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19774-529764/.minikube/profiles/no-preload-966632/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.83s)

                                                
                                    
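The Pause entry above exercises a pause/verify/unpause cycle: after "minikube pause" the API server should report "Paused" and the kubelet "Stopped", and "minikube status" deliberately exits non-zero in that state (hence the "may be ok" notes). The sketch below is a minimal stand-alone illustration of scripting that same cycle against the minikube CLI, not part of the test suite; the profile name and template flags are taken from the log, and error handling is simplified.

// pausecheck.go - minimal sketch of the pause/status/unpause cycle
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes minikube with the given arguments and returns trimmed stdout.
// Note: "minikube status" exits non-zero when a component is Paused/Stopped,
// so stdout is still read even when err is non-nil.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "newest-cni-602180" // profile name from the log above

	if _, err := run("pause", "-p", profile); err != nil {
		fmt.Println("pause failed:", err)
		return
	}

	api, _ := run("status", "-p", profile, "--format", "{{.APIServer}}")   // expect "Paused"
	kubelet, _ := run("status", "-p", profile, "--format", "{{.Kubelet}}") // expect "Stopped"
	fmt.Printf("after pause: apiserver=%s kubelet=%s\n", api, kubelet)

	if _, err := run("unpause", "-p", profile); err != nil {
		fmt.Println("unpause failed:", err)
	}
}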

Test skip (35/265)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-738106 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-076496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-076496
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    